2026-03-10T11:19:59.688 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-10T11:19:59.696 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T11:19:59.719 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1014
branch: squid
description: orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity}
email: null
first_in_suite: false
flavor: default
job_id: '1014'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-10_01:00:38-orch-squid-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - CEPHADM_STRAY_DAEMON
    - CEPHADM_FAILED_DAEMON
    - CEPHADM_AGENT_DOWN
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
seed: 8043
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b
targets:
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKMB2kq8U/ALFBCaYezCfmrd+7bwx1dXmHS3TE+BOBJieFe5k8qM+Q4JN+AIl8ydThr5rYEjGM/p749LfqOoddY=
  vm07.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOhT8KRzrqKoyib/Z9qVKYbpEpMMbqwrn5+3Js+2ImW/3MOJkpX194fCXnFxMp9txsPnTK+AOakr6UVAbBBfvUQ=
tasks:
- cephadm:
    cephadm_branch: v17.2.0
    cephadm_git_url: https://github.com/ceph/ceph
    image: quay.io/ceph/ceph:v17.2.0
- cephadm.shell:
    env:
    - sha1
    mon.a:
    - radosgw-admin realm create --rgw-realm=r --default
    - radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
    - radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default
    - radosgw-admin period update --rgw-realm=r --commit
    - ceph orch apply rgw foo --realm r --zone z --placement=2 --port=8000
    - ceph osd pool create foo
    - rbd pool init foo
    - ceph orch apply iscsi foo u p
    - sleep 180
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force
    - ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force
    - ceph config set global log_to_journald false --force
    - ceph orch ps
    - ceph versions
    - ceph -s
    - ceph orch ls
    - ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)" --image quay.ceph.io/ceph-ci/ceph:$sha1
    - ceph orch ps --refresh
    - sleep 180
    - ceph orch ps
    - ceph versions
    - ceph -s
    - ceph health detail
    - ceph versions | jq -e '.mgr | length == 2'
    - ceph mgr fail
    - sleep 180
    - ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)" --image quay.ceph.io/ceph-ci/ceph:$sha1
    - ceph orch ps --refresh
    - sleep 180
    - ceph orch ps
    - ceph versions
    - ceph health detail
    - ceph -s
    - ceph mgr fail
    - sleep 180
    - ceph orch ps
    - ceph versions
    - ceph -s
    - ceph health detail
    - ceph versions | jq -e '.mgr | length == 1'
    - ceph mgr fail
    - sleep 180
    - ceph orch ps
    - ceph orch ls
    - ceph versions
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph versions | jq -e '.mgr | length == 1'
    - ceph versions | jq -e '.mgr | keys' | grep $sha1
    - ceph versions | jq -e '.overall | length == 2'
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 2'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk '{print $2}')
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.mon | length == 2'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.y | awk '{print $2}')
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.mon | length == 1'
    - ceph versions | jq -e '.mon | keys' | grep $sha1
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 5'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types osd --limit 2
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.osd | length == 2'
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 7'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd --limit 1
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.osd | length == 2'
    - ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 8'
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.osd | length == 1'
    - ceph versions | jq -e '.osd | keys' | grep $sha1
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --services rgw.foo
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done
    - ceph orch ps
    - ceph versions | jq -e '.rgw | length == 1'
    - ceph versions | jq -e '.rgw | keys' | grep $sha1
    - ceph orch upgrade status
    - ceph health detail
    - ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1
- cephadm.shell:
    env:
    - sha1
    mon.a:
    - while ceph orch upgrade status | jq '.in_progress' | grep true && ! ceph orch upgrade status | jq '.message' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done
    - ceph orch ps
    - ceph versions
    - echo "wait for servicemap items w/ changing names to refresh"
    - sleep 60
    - ceph orch ps
    - ceph versions
    - ceph orch upgrade status
    - ceph health detail
    - ceph versions | jq -e '.overall | length == 1'
    - ceph versions | jq -e '.overall | keys' | grep $sha1
    - ceph orch ls | grep '^osd '
- cephadm.shell:
    mon.a:
    - ceph orch upgrade ls
    - ceph orch upgrade ls --image quay.io/ceph/ceph --show-all-versions | grep 16.2.0
    - ceph orch upgrade ls --image quay.io/ceph/ceph --tags | grep v16.2.2
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-10_01:00:38
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473

2026-03-10T11:19:59.719 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa; will attempt to use it
2026-03-10T11:19:59.719 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks
2026-03-10T11:19:59.719 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-10T11:19:59.720 INFO:teuthology.task.internal:Checking packages...
2026-03-10T11:19:59.720 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-10T11:19:59.720 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-10T11:19:59.720 INFO:teuthology.packaging:ref: None
2026-03-10T11:19:59.720 INFO:teuthology.packaging:tag: None
2026-03-10T11:19:59.720 INFO:teuthology.packaging:branch: squid
2026-03-10T11:19:59.720 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:19:59.720 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-10T11:20:00.311 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-10T11:20:00.312 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-10T11:20:00.313 INFO:teuthology.task.internal:no buildpackages task found
2026-03-10T11:20:00.313 INFO:teuthology.run_tasks:Running task internal.save_config...
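Note: every upgrade phase in the cephadm.shell task list above gates on the same polling idiom before asserting progress. Extracted as a standalone sketch (assumes a reachable cluster CLI and jq on PATH; not part of the log itself):

    # Poll until the orchestrator reports the upgrade finished or errored out.
    while ceph orch upgrade status | jq '.in_progress' | grep -q true && \
          ! ceph orch upgrade status | jq '.message' | grep -q Error; do
        ceph orch ps              # per-daemon image/version view
        ceph versions             # cluster-wide version histogram
        ceph orch upgrade status
        sleep 30
    done

Each phase then verifies the staggered state with checks like `ceph versions | jq -e '.mgr | length == 1'`; `jq -e` exits nonzero when the expression is false, which fails the shell task and hence the job.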
2026-03-10T11:20:00.313 INFO:teuthology.task.internal:Saving configuration
2026-03-10T11:20:00.320 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-10T11:20:00.321 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-10T11:20:00.328 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1014', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 11:18:44.387630', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKMB2kq8U/ALFBCaYezCfmrd+7bwx1dXmHS3TE+BOBJieFe5k8qM+Q4JN+AIl8ydThr5rYEjGM/p749LfqOoddY='}
2026-03-10T11:20:00.335 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm07.local', 'description': '/archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1014', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-10 11:18:44.387004', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:07', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOhT8KRzrqKoyib/Z9qVKYbpEpMMbqwrn5+3Js+2ImW/3MOJkpX194fCXnFxMp9txsPnTK+AOakr6UVAbBBfvUQ='}
2026-03-10T11:20:00.335 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-10T11:20:00.336 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'node-exporter.a', 'alertmanager.a']
2026-03-10T11:20:00.336 INFO:teuthology.task.internal:roles: ubuntu@vm07.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b']
2026-03-10T11:20:00.336 INFO:teuthology.run_tasks:Running task console_log...
2026-03-10T11:20:00.342 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-10T11:20:00.349 DEBUG:teuthology.task.console_log:vm07 does not support IPMI; excluding
2026-03-10T11:20:00.349 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7efc88116170>, signals=[15])
2026-03-10T11:20:00.349 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-10T11:20:00.350 INFO:teuthology.task.internal:Opening connections...
2026-03-10T11:20:00.350 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-10T11:20:00.351 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T11:20:00.416 DEBUG:teuthology.task.internal:connecting to ubuntu@vm07.local
2026-03-10T11:20:00.416 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T11:20:00.473 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-10T11:20:00.475 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-10T11:20:00.485 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-10T11:20:00.486 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-10T11:20:00.531 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T11:20:00.531 INFO:teuthology.orchestra.run.vm05.stdout:NAME="Ubuntu"
2026-03-10T11:20:00.531 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="22.04"
2026-03-10T11:20:00.531 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T11:20:00.531 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_CODENAME=jammy
2026-03-10T11:20:00.531 INFO:teuthology.orchestra.run.vm05.stdout:ID=ubuntu
2026-03-10T11:20:00.531 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE=debian
2026-03-10T11:20:00.532 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T11:20:00.532 INFO:teuthology.orchestra.run.vm05.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T11:20:00.532 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T11:20:00.532 INFO:teuthology.orchestra.run.vm05.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T11:20:00.532 INFO:teuthology.orchestra.run.vm05.stdout:UBUNTU_CODENAME=jammy
2026-03-10T11:20:00.532 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-10T11:20:00.537 DEBUG:teuthology.orchestra.run.vm07:> uname -m
2026-03-10T11:20:00.548 INFO:teuthology.orchestra.run.vm07.stdout:x86_64
2026-03-10T11:20:00.548 DEBUG:teuthology.orchestra.run.vm07:> cat /etc/os-release
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:NAME="Ubuntu"
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_ID="22.04"
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:VERSION_CODENAME=jammy
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:ID=ubuntu
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:ID_LIKE=debian
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-10T11:20:00.594 INFO:teuthology.orchestra.run.vm07.stdout:UBUNTU_CODENAME=jammy
2026-03-10T11:20:00.595 INFO:teuthology.lock.ops:Updating vm07.local on lock server
2026-03-10T11:20:00.600 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-10T11:20:00.603 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-10T11:20:00.604 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-10T11:20:00.604 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-10T11:20:00.605 DEBUG:teuthology.orchestra.run.vm07:> test '!' -e /home/ubuntu/cephtest
2026-03-10T11:20:00.637 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-10T11:20:00.639 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-10T11:20:00.639 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-10T11:20:00.650 DEBUG:teuthology.orchestra.run.vm07:> test -z $(ls -A /var/lib/ceph)
2026-03-10T11:20:00.652 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T11:20:00.682 INFO:teuthology.orchestra.run.vm07.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-10T11:20:00.682 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-10T11:20:00.690 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-10T11:20:00.695 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:20:00.945 DEBUG:teuthology.orchestra.run.vm07:> test -e /ceph-qa-ready
2026-03-10T11:20:00.948 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:20:01.182 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-10T11:20:01.183 INFO:teuthology.task.internal:Creating test directory...
2026-03-10T11:20:01.183 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T11:20:01.185 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-10T11:20:01.189 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-10T11:20:01.190 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-10T11:20:01.192 INFO:teuthology.task.internal:Creating archive directory...
2026-03-10T11:20:01.192 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T11:20:01.230 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-10T11:20:01.236 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-10T11:20:01.237 INFO:teuthology.task.internal:Enabling coredump saving...
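Note: the internal.check_conflict and internal.check_ceph_data steps above guard against leftover state from an earlier run. One subtlety visible in the output: `test -z $(ls -A /var/lib/ceph)` still passes when the directory does not exist, because ls reports to stderr and the unquoted substitution leaves `test -z` with no operand at all, which evaluates true. A quoted sketch that is explicit about both cases (not from the log):

    test '!' -e /home/ubuntu/cephtest                 # no stale test directory
    test -z "$(ls -A /var/lib/ceph 2>/dev/null)"      # /var/lib/ceph empty or absent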
2026-03-10T11:20:01.238 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T11:20:01.275 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:20:01.275 DEBUG:teuthology.orchestra.run.vm07:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-10T11:20:01.278 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:20:01.279 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T11:20:01.317 DEBUG:teuthology.orchestra.run.vm07:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-10T11:20:01.324 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T11:20:01.327 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T11:20:01.331 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T11:20:01.337 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-10T11:20:01.337 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-10T11:20:01.339 INFO:teuthology.task.internal:Configuring sudo...
2026-03-10T11:20:01.339 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T11:20:01.369 DEBUG:teuthology.orchestra.run.vm07:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-10T11:20:01.386 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-10T11:20:01.390 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-10T11:20:01.390 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T11:20:01.417 DEBUG:teuthology.orchestra.run.vm07:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-10T11:20:01.433 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T11:20:01.463 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T11:20:01.506 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T11:20:01.506 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T11:20:01.555 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-10T11:20:01.559 DEBUG:teuthology.orchestra.run.vm07:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-10T11:20:01.605 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T11:20:01.605 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-10T11:20:01.654 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-10T11:20:01.655 DEBUG:teuthology.orchestra.run.vm07:> sudo service rsyslog restart
2026-03-10T11:20:01.716 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-10T11:20:01.718 INFO:teuthology.task.internal:Starting timer...
2026-03-10T11:20:01.718 INFO:teuthology.run_tasks:Running task pcp...
2026-03-10T11:20:01.721 INFO:teuthology.run_tasks:Running task selinux...
2026-03-10T11:20:01.724 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-10T11:20:01.724 INFO:teuthology.task.selinux:Excluding vm07: VMs are not yet supported
2026-03-10T11:20:01.724 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-10T11:20:01.724 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-10T11:20:01.724 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-10T11:20:01.724 INFO:teuthology.run_tasks:Running task ansible.cephlab...
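Note: the ansible.cephlab task that follows provisions the test nodes from the ceph-cm-ansible playbooks; the overrides select the main branch and skip tags that do not apply to these VMs. The invocation it builds can be rerun by hand against the same hosts (the inventory file is generated per run, so a placeholder path is used in this sketch):

    ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' \
        -i <generated-inventory> --limit vm05.local,vm07.local \
        /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml \
        --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs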
2026-03-10T11:20:01.726 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-10T11:20:01.726 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-10T11:20:01.728 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-10T11:20:02.344 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-10T11:20:02.350 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-10T11:20:02.350 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventorygpiuqyw7 --limit vm05.local,vm07.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-10T11:22:28.775 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm05.local'), Remote(name='ubuntu@vm07.local')]
2026-03-10T11:22:28.776 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-10T11:22:28.776 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T11:22:28.834 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-10T11:22:29.057 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-10T11:22:29.057 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm07.local'
2026-03-10T11:22:29.057 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm07.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-10T11:22:29.117 DEBUG:teuthology.orchestra.run.vm07:> true
2026-03-10T11:22:29.336 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm07.local'
2026-03-10T11:22:29.337 INFO:teuthology.run_tasks:Running task clock...
2026-03-10T11:22:29.340 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
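Note: the clock task below issues one compound command per host, written as a fallback chain so the same line works whether the node runs ntp, ntpd, or chrony. The same commands, broken out for readability (the trailing || true keeps a missing status tool from failing the task):

    sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service
    sudo ntpd -gq || sudo chronyc makestep      # force an immediate clock correction
    sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service
    PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true   # report peer state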
2026-03-10T11:22:29.340 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T11:22:29.340 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T11:22:29.341 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-10T11:22:29.341 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: Command line: ntpd -gq
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: ----------------------------------------------------
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: corporation. Support and training for ntp-4 are
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: available at https://www.nwtime.org/support
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: ----------------------------------------------------
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: proto: precision = 0.030 usec (-25)
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: basedate set to 2022-02-04
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: gps base set to 2022-02-06 (week 2196)
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T11:22:29.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T11:22:29.359 INFO:teuthology.orchestra.run.vm05.stderr:10 Mar 11:22:29 ntpd[16073]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T11:22:29.359 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T11:22:29.359 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T11:22:29.359 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T11:22:29.359 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: Listen normally on 3 ens3 192.168.123.105:123
2026-03-10T11:22:29.359 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: Listen normally on 4 lo [::1]:123
2026-03-10T11:22:29.359 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:5%2]:123
2026-03-10T11:22:29.359 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:29 ntpd[16073]: Listening on routing socket on fd #22 for interface updates
2026-03-10T11:22:29.394 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-10T11:22:29.394 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: Command line: ntpd -gq
2026-03-10T11:22:29.394 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: ----------------------------------------------------
2026-03-10T11:22:29.394 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: ntp-4 is maintained by Network Time Foundation,
2026-03-10T11:22:29.394 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-10T11:22:29.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: corporation. Support and training for ntp-4 are
2026-03-10T11:22:29.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: available at https://www.nwtime.org/support
2026-03-10T11:22:29.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: ----------------------------------------------------
2026-03-10T11:22:29.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: proto: precision = 0.030 usec (-25)
2026-03-10T11:22:29.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: basedate set to 2022-02-04
2026-03-10T11:22:29.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: gps base set to 2022-02-06 (week 2196)
2026-03-10T11:22:29.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-10T11:22:29.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-10T11:22:29.395 INFO:teuthology.orchestra.run.vm07.stderr:10 Mar 11:22:29 ntpd[16110]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 73 days ago
2026-03-10T11:22:29.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: Listen and drop on 0 v6wildcard [::]:123
2026-03-10T11:22:29.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-10T11:22:29.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: Listen normally on 2 lo 127.0.0.1:123
2026-03-10T11:22:29.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: Listen normally on 3 ens3 192.168.123.107:123
2026-03-10T11:22:29.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: Listen normally on 4 lo [::1]:123
2026-03-10T11:22:29.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:7%2]:123
2026-03-10T11:22:29.397 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:29 ntpd[16110]: Listening on routing socket on fd #22 for interface updates
2026-03-10T11:22:30.358 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:30 ntpd[16073]: Soliciting pool server 82.165.178.31
2026-03-10T11:22:30.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:30 ntpd[16110]: Soliciting pool server 82.165.178.31
2026-03-10T11:22:31.357 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:31 ntpd[16073]: Soliciting pool server 212.132.97.26
2026-03-10T11:22:31.357 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:31 ntpd[16073]: Soliciting pool server 141.144.246.224
2026-03-10T11:22:31.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:31 ntpd[16110]: Soliciting pool server 212.132.97.26
2026-03-10T11:22:31.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:31 ntpd[16110]: Soliciting pool server 141.144.246.224
2026-03-10T11:22:32.356 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:32 ntpd[16073]: Soliciting pool server 213.239.234.28
2026-03-10T11:22:32.356 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:32 ntpd[16073]: Soliciting pool server 85.121.52.237
2026-03-10T11:22:32.357 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:32 ntpd[16073]: Soliciting pool server 212.132.108.186
2026-03-10T11:22:32.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:32 ntpd[16110]: Soliciting pool server 213.239.234.28
2026-03-10T11:22:32.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:32 ntpd[16110]: Soliciting pool server 85.121.52.237
2026-03-10T11:22:32.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:32 ntpd[16110]: Soliciting pool server 212.132.108.186
2026-03-10T11:22:33.356 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:33 ntpd[16073]: Soliciting pool server 77.90.10.92
2026-03-10T11:22:33.368 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:33 ntpd[16073]: Soliciting pool server 195.201.107.151
2026-03-10T11:22:33.368 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:33 ntpd[16073]: Soliciting pool server 168.119.211.223
2026-03-10T11:22:33.369 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:33 ntpd[16073]: Soliciting pool server 94.130.23.46
2026-03-10T11:22:33.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:33 ntpd[16110]: Soliciting pool server 77.90.10.92
2026-03-10T11:22:33.414 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:33 ntpd[16110]: Soliciting pool server 195.201.107.151
2026-03-10T11:22:33.414 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:33 ntpd[16110]: Soliciting pool server 168.119.211.223
2026-03-10T11:22:33.414 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:33 ntpd[16110]: Soliciting pool server 94.130.23.46
2026-03-10T11:22:34.355 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:34 ntpd[16073]: Soliciting pool server 46.4.54.78
2026-03-10T11:22:34.356 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:34 ntpd[16073]: Soliciting pool server 185.13.148.71
2026-03-10T11:22:34.356 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:34 ntpd[16073]: Soliciting pool server 129.250.35.250
2026-03-10T11:22:34.356 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:34 ntpd[16073]: Soliciting pool server 185.125.190.57
2026-03-10T11:22:34.395 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:34 ntpd[16110]: Soliciting pool server 46.4.54.78
2026-03-10T11:22:34.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:34 ntpd[16110]: Soliciting pool server 185.13.148.71
2026-03-10T11:22:34.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:34 ntpd[16110]: Soliciting pool server 129.250.35.250
2026-03-10T11:22:34.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:34 ntpd[16110]: Soliciting pool server 185.125.190.57
2026-03-10T11:22:35.356 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:35 ntpd[16073]: Soliciting pool server 185.125.190.56
2026-03-10T11:22:35.356 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:35 ntpd[16073]: Soliciting pool server 116.202.118.202
2026-03-10T11:22:35.356 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:35 ntpd[16073]: Soliciting pool server 5.45.97.204
2026-03-10T11:22:35.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:35 ntpd[16110]: Soliciting pool server 185.125.190.56
2026-03-10T11:22:35.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:35 ntpd[16110]: Soliciting pool server 116.202.118.202
2026-03-10T11:22:35.396 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:35 ntpd[16110]: Soliciting pool server 5.45.97.204
2026-03-10T11:22:38.382 INFO:teuthology.orchestra.run.vm05.stdout:10 Mar 11:22:38 ntpd[16073]: ntpd: time slew -0.001775 s
2026-03-10T11:22:38.382 INFO:teuthology.orchestra.run.vm05.stdout:ntpd: time slew -0.001775s
2026-03-10T11:22:38.402 INFO:teuthology.orchestra.run.vm05.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T11:22:38.402 INFO:teuthology.orchestra.run.vm05.stdout:==============================================================================
2026-03-10T11:22:38.402 INFO:teuthology.orchestra.run.vm05.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.402 INFO:teuthology.orchestra.run.vm05.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.403 INFO:teuthology.orchestra.run.vm05.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.403 INFO:teuthology.orchestra.run.vm05.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.403 INFO:teuthology.orchestra.run.vm05.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.422 INFO:teuthology.orchestra.run.vm07.stdout:10 Mar 11:22:38 ntpd[16110]: ntpd: time slew +0.002139 s
2026-03-10T11:22:38.423 INFO:teuthology.orchestra.run.vm07.stdout:ntpd: time slew +0.002139s
2026-03-10T11:22:38.444 INFO:teuthology.orchestra.run.vm07.stdout: remote refid st t when poll reach delay offset jitter
2026-03-10T11:22:38.444 INFO:teuthology.orchestra.run.vm07.stdout:==============================================================================
2026-03-10T11:22:38.444 INFO:teuthology.orchestra.run.vm07.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.444 INFO:teuthology.orchestra.run.vm07.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.444 INFO:teuthology.orchestra.run.vm07.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.444 INFO:teuthology.orchestra.run.vm07.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.444 INFO:teuthology.orchestra.run.vm07.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-10T11:22:38.444 INFO:teuthology.run_tasks:Running task cephadm...
2026-03-10T11:22:38.488 INFO:tasks.cephadm:Config: {'cephadm_branch': 'v17.2.0', 'cephadm_git_url': 'https://github.com/ceph/ceph', 'image': 'quay.io/ceph/ceph:v17.2.0', 'conf': {'global': {'mon election default strategy': 3}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'CEPHADM_STRAY_DAEMON', 'CEPHADM_FAILED_DAEMON', 'CEPHADM_AGENT_DOWN'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}
2026-03-10T11:22:38.488 INFO:tasks.cephadm:Cluster image is quay.io/ceph/ceph:v17.2.0
2026-03-10T11:22:38.488 INFO:tasks.cephadm:Cluster fsid is 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:22:38.488 INFO:tasks.cephadm:Choosing monitor IPs and ports...
2026-03-10T11:22:38.488 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.105', 'mon.c': '[v2:192.168.123.105:3301,v1:192.168.123.105:6790]', 'mon.b': '192.168.123.107'}
2026-03-10T11:22:38.488 INFO:tasks.cephadm:First mon is mon.a on vm05
2026-03-10T11:22:38.488 INFO:tasks.cephadm:First mgr is y
2026-03-10T11:22:38.488 INFO:tasks.cephadm:Normalizing hostnames...
2026-03-10T11:22:38.488 DEBUG:teuthology.orchestra.run.vm05:> sudo hostname $(hostname -s)
2026-03-10T11:22:38.495 DEBUG:teuthology.orchestra.run.vm07:> sudo hostname $(hostname -s)
2026-03-10T11:22:38.503 INFO:tasks.cephadm:Downloading cephadm (repo https://github.com/ceph/ceph ref v17.2.0)...
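Note: the cephadm task pins its bootstrap tooling to the starting release by fetching the standalone cephadm script for the v17.2.0 tag from the raw GitHub view, then checking that the download is plausibly a script (non-empty and larger than 1000 bytes) before marking it executable; the size test guards against silently saving an error page instead of the script. Condensed sketch of the two steps run on each host below:

    curl --silent https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm > cephadm
    test -s cephadm && test "$(stat -c%s cephadm)" -gt 1000 && chmod +x cephadm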
2026-03-10T11:22:38.503 DEBUG:teuthology.orchestra.run.vm05:> curl --silent https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T11:22:38.806 INFO:teuthology.orchestra.run.vm05.stdout:-rw-rw-r-- 1 ubuntu ubuntu 320521 Mar 10 11:22 /home/ubuntu/cephtest/cephadm
2026-03-10T11:22:38.806 DEBUG:teuthology.orchestra.run.vm07:> curl --silent https://raw.githubusercontent.com/ceph/ceph/v17.2.0/src/cephadm/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm
2026-03-10T11:22:38.890 INFO:teuthology.orchestra.run.vm07.stdout:-rw-rw-r-- 1 ubuntu ubuntu 320521 Mar 10 11:22 /home/ubuntu/cephtest/cephadm
2026-03-10T11:22:38.890 DEBUG:teuthology.orchestra.run.vm05:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T11:22:38.893 DEBUG:teuthology.orchestra.run.vm07:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm
2026-03-10T11:22:38.901 INFO:tasks.cephadm:Pulling image quay.io/ceph/ceph:v17.2.0 on all hosts...
2026-03-10T11:22:38.901 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 pull
2026-03-10T11:22:38.934 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 pull
2026-03-10T11:22:39.013 INFO:teuthology.orchestra.run.vm05.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0...
2026-03-10T11:22:39.018 INFO:teuthology.orchestra.run.vm07.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0...
2026-03-10T11:23:02.969 INFO:teuthology.orchestra.run.vm07.stdout:{
2026-03-10T11:23:02.970 INFO:teuthology.orchestra.run.vm07.stdout: "ceph_version": "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)",
2026-03-10T11:23:02.970 INFO:teuthology.orchestra.run.vm07.stdout: "image_id": "e1d6a67b021eb077ee22bf650f1a9fb1980a2cf5c36bdb9cba9eac6de8f702d9",
2026-03-10T11:23:02.970 INFO:teuthology.orchestra.run.vm07.stdout: "repo_digests": [
2026-03-10T11:23:02.970 INFO:teuthology.orchestra.run.vm07.stdout: "quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a"
2026-03-10T11:23:02.970 INFO:teuthology.orchestra.run.vm07.stdout: ]
2026-03-10T11:23:02.970 INFO:teuthology.orchestra.run.vm07.stdout:}
2026-03-10T11:23:02.982 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:23:02.982 INFO:teuthology.orchestra.run.vm05.stdout: "ceph_version": "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)",
2026-03-10T11:23:02.982 INFO:teuthology.orchestra.run.vm05.stdout: "image_id": "e1d6a67b021eb077ee22bf650f1a9fb1980a2cf5c36bdb9cba9eac6de8f702d9",
2026-03-10T11:23:02.982 INFO:teuthology.orchestra.run.vm05.stdout: "repo_digests": [
2026-03-10T11:23:02.982 INFO:teuthology.orchestra.run.vm05.stdout: "quay.io/ceph/ceph@sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a"
2026-03-10T11:23:02.982 INFO:teuthology.orchestra.run.vm05.stdout: ]
2026-03-10T11:23:02.982 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:23:02.993 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph
2026-03-10T11:23:02.999 DEBUG:teuthology.orchestra.run.vm07:> sudo mkdir -p /etc/ceph
2026-03-10T11:23:03.006 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /etc/ceph
2026-03-10T11:23:03.048 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 777 /etc/ceph
2026-03-10T11:23:03.056 INFO:tasks.cephadm:Writing seed config...
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [global] mon election default strategy = 3
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [mgr] debug mgr = 20
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [mgr] debug ms = 1
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [mon] debug mon = 20
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [mon] debug ms = 1
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [mon] debug paxos = 20
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [osd] debug ms = 1
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [osd] debug osd = 20
2026-03-10T11:23:03.056 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000
2026-03-10T11:23:03.057 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T11:23:03.057 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/seed.ceph.conf
2026-03-10T11:23:03.092 DEBUG:tasks.cephadm:Final config:
[global]
# make logging friendly to teuthology
log_to_file = true
log_to_stderr = false
log to journald = false
mon cluster log to file = true
mon cluster log file level = debug
mon clock drift allowed = 1.000
# replicate across OSDs, not hosts
osd crush chooseleaf type = 0
#osd pool default size = 2
osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
# enable some debugging
auth debug = true
ms die on old message = true
ms die on bug = true
debug asserts on shutdown = true
# adjust warnings
mon max pg per osd = 10000    # >= luminous
mon pg warn max object skew = 0
mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off
# tests delete pools
mon allow pool delete = true
fsid = 72041074-1c73-11f1-8607-4fca9a5e0a4d
mon election default strategy = 3

[osd]
osd scrub load threshold = 5.0
osd scrub max interval = 600
osd mclock profile = high_recovery_ops
osd recover clone overlap = true
osd recovery max chunk = 1048576
osd deep scrub update digest min age = 30
osd map max advance = 10
osd memory target autotune = true
# debugging
osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true
osd debug pg log writeout = true
osd debug verify cached snaps = true
osd debug verify missing on start = true
osd debug misdirected ops = true
osd op queue = debug_random
osd op queue cut off = debug_random
osd shutdown pgref assert = true
bdev debug aio = true
osd sloppy crc = true
debug ms = 1
debug osd = 20
osd mclock iops capacity threshold hdd = 49000

[mgr]
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false
debug mgr = 20
debug ms = 1
mgr/cephadm/use_agent = False

[mon]
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10
# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660     # 11m
auth service ticket ttl = 240 # 4m
# don't complain about global id reclaim
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false
debug mon = 20
debug ms = 1
debug paxos = 20

[client.rgw]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
2026-03-10T11:23:03.092 DEBUG:teuthology.orchestra.run.vm05:mon.a> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.a.service
2026-03-10T11:23:03.134 DEBUG:teuthology.orchestra.run.vm05:mgr.y> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.y.service
2026-03-10T11:23:03.178 INFO:tasks.cephadm:Bootstrapping...
2026-03-10T11:23:03.178 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 -v bootstrap --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.105 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring
2026-03-10T11:23:03.303 INFO:teuthology.orchestra.run.vm05.stderr:--------------------------------------------------------------------------------
2026-03-10T11:23:03.303 INFO:teuthology.orchestra.run.vm05.stderr:cephadm ['--image', 'quay.io/ceph/ceph:v17.2.0', '-v', 'bootstrap', '--fsid', '72041074-1c73-11f1-8607-4fca9a5e0a4d', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.105', '--skip-admin-label']
2026-03-10T11:23:03.303 INFO:teuthology.orchestra.run.vm05.stderr:Verifying podman|docker is present...
2026-03-10T11:23:03.303 INFO:teuthology.orchestra.run.vm05.stderr:Verifying lvm2 is present...
2026-03-10T11:23:03.303 INFO:teuthology.orchestra.run.vm05.stderr:Verifying time synchronization is in place...
2026-03-10T11:23:03.306 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T11:23:03.308 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: inactive
2026-03-10T11:23:03.310 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Failed to get unit file state for chronyd.service: No such file or directory
2026-03-10T11:23:03.312 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: inactive
2026-03-10T11:23:03.314 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: masked
2026-03-10T11:23:03.316 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: inactive
2026-03-10T11:23:03.318 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T11:23:03.321 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: inactive
2026-03-10T11:23:03.323 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: enabled
2026-03-10T11:23:03.325 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: active
2026-03-10T11:23:03.326 INFO:teuthology.orchestra.run.vm05.stderr:Unit ntp.service is enabled and running
2026-03-10T11:23:03.326 INFO:teuthology.orchestra.run.vm05.stderr:Repeating the final host check...
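Note: the bootstrap command above reduces to a small set of flags: pin the container image, reuse the pre-chosen fsid, inject extra settings via --config, and hold back everything optional (--orphan-initial-daemons, --skip-monitoring-stack) so the test drives service placement itself. Stripped to those essentials (fsid and IP as in this run; output/path flags omitted):

    sudo ./cephadm --image quay.io/ceph/ceph:v17.2.0 bootstrap \
        --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d \
        --mon-ip 192.168.123.105 \
        --config seed.ceph.conf \
        --orphan-initial-daemons --skip-monitoring-stack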
2026-03-10T11:23:03.326 INFO:teuthology.orchestra.run.vm05.stderr:docker (/usr/bin/docker) is present
2026-03-10T11:23:03.326 INFO:teuthology.orchestra.run.vm05.stderr:systemctl is present
2026-03-10T11:23:03.326 INFO:teuthology.orchestra.run.vm05.stderr:lvcreate is present
2026-03-10T11:23:03.328 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Failed to get unit file state for chrony.service: No such file or directory
2026-03-10T11:23:03.330 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: inactive
2026-03-10T11:23:03.332 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Failed to get unit file state for chronyd.service: No such file or directory
2026-03-10T11:23:03.334 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: inactive
2026-03-10T11:23:03.336 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: masked
2026-03-10T11:23:03.338 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: inactive
2026-03-10T11:23:03.340 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Failed to get unit file state for ntpd.service: No such file or directory
2026-03-10T11:23:03.342 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: inactive
2026-03-10T11:23:03.345 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: enabled
2026-03-10T11:23:03.347 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: active
2026-03-10T11:23:03.347 INFO:teuthology.orchestra.run.vm05.stderr:Unit ntp.service is enabled and running
2026-03-10T11:23:03.347 INFO:teuthology.orchestra.run.vm05.stderr:Host looks OK
2026-03-10T11:23:03.347 INFO:teuthology.orchestra.run.vm05.stderr:Cluster fsid: 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:03.347 INFO:teuthology.orchestra.run.vm05.stderr:Acquiring lock 139799831340000 on /run/cephadm/72041074-1c73-11f1-8607-4fca9a5e0a4d.lock
2026-03-10T11:23:03.347 INFO:teuthology.orchestra.run.vm05.stderr:Lock 139799831340000 acquired on /run/cephadm/72041074-1c73-11f1-8607-4fca9a5e0a4d.lock
2026-03-10T11:23:03.347 INFO:teuthology.orchestra.run.vm05.stderr:Verifying IP 192.168.123.105 port 3300 ...
2026-03-10T11:23:03.347 INFO:teuthology.orchestra.run.vm05.stderr:Verifying IP 192.168.123.105 port 6789 ...
2026-03-10T11:23:03.348 INFO:teuthology.orchestra.run.vm05.stderr:Base mon IP is 192.168.123.105, final addrv is [v2:192.168.123.105:3300,v1:192.168.123.105:6789]
2026-03-10T11:23:03.349 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.105 metric 100
2026-03-10T11:23:03.349 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
2026-03-10T11:23:03.349 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.105 metric 100
2026-03-10T11:23:03.349 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.105 metric 100
2026-03-10T11:23:03.350 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: ::1 dev lo proto kernel metric 256 pref medium
2026-03-10T11:23:03.350 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: fe80::/64 dev ens3 proto kernel metric 256 pref medium
2026-03-10T11:23:03.351 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: 1: lo: mtu 65536 state UNKNOWN qlen 1000
2026-03-10T11:23:03.351 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: inet6 ::1/128 scope host
2026-03-10T11:23:03.351 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: valid_lft forever preferred_lft forever
2026-03-10T11:23:03.351 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: 2: ens3: mtu 1500 state UP qlen 1000
2026-03-10T11:23:03.351 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: inet6 fe80::5055:ff:fe00:5/64 scope link
2026-03-10T11:23:03.351 INFO:teuthology.orchestra.run.vm05.stderr:/usr/sbin/ip: valid_lft forever preferred_lft forever
2026-03-10T11:23:03.351 INFO:teuthology.orchestra.run.vm05.stderr:Mon IP `192.168.123.105` is in CIDR network `192.168.123.0/24`
2026-03-10T11:23:03.351 INFO:teuthology.orchestra.run.vm05.stderr:- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-10T11:23:03.352 INFO:teuthology.orchestra.run.vm05.stderr:Pulling container image quay.io/ceph/ceph:v17.2.0...
2026-03-10T11:23:04.384 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/docker: v17.2.0: Pulling from ceph/ceph
2026-03-10T11:23:04.388 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/docker: Digest: sha256:12a0a4f43413fd97a14a3d47a3451b2d2df50020835bb93db666209f3f77617a
2026-03-10T11:23:04.388 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/docker: Status: Image is up to date for quay.io/ceph/ceph:v17.2.0
2026-03-10T11:23:04.389 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/docker: quay.io/ceph/ceph:v17.2.0
2026-03-10T11:23:04.519 INFO:teuthology.orchestra.run.vm05.stderr:ceph: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
2026-03-10T11:23:04.556 INFO:teuthology.orchestra.run.vm05.stderr:Ceph version: ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)
2026-03-10T11:23:04.556 INFO:teuthology.orchestra.run.vm05.stderr:Extracting ceph user uid/gid from container image...
2026-03-10T11:23:04.740 INFO:teuthology.orchestra.run.vm05.stderr:stat: 167 167
2026-03-10T11:23:04.767 INFO:teuthology.orchestra.run.vm05.stderr:Creating initial keys...
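Note: the three AQA... strings that follow are freshly generated cephx secrets; bootstrap invokes ceph-authtool once per key it needs before assembling the initial keyrings (mon, client.admin, and mgr in this sequence is an inference from the three calls, not spelled out by the log). The generator itself is a stock one-liner:

    ceph-authtool --gen-print-key    # prints one new base64 cephx secret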
2026-03-10T11:23:04.841 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-authtool: AQAY/69pjLH+MRAAdis6hZ9efaEoYRoFO+0b0w==
2026-03-10T11:23:04.961 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-authtool: AQAY/69pRgUpORAAMC1I7PObQOb2jtWDakJh3Q==
2026-03-10T11:23:05.066 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-authtool: AQAZ/69pVE/SAxAA6FujR3ISs5Iq18XM4KP7xQ==
2026-03-10T11:23:05.096 INFO:teuthology.orchestra.run.vm05.stderr:Creating initial monmap...
2026-03-10T11:23:05.185 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T11:23:05.185 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/monmaptool: setting min_mon_release = octopus
2026-03-10T11:23:05.185 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: set fsid to 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:05.185 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/monmaptool: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T11:23:05.210 INFO:teuthology.orchestra.run.vm05.stderr:monmaptool for a [v2:192.168.123.105:3300,v1:192.168.123.105:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-10T11:23:05.210 INFO:teuthology.orchestra.run.vm05.stderr:setting min_mon_release = octopus
2026-03-10T11:23:05.210 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/monmaptool: set fsid to 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:05.210 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-10T11:23:05.210 INFO:teuthology.orchestra.run.vm05.stderr:
2026-03-10T11:23:05.210 INFO:teuthology.orchestra.run.vm05.stderr:Creating mon...
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.284+0000 7f7031d78880 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.284+0000 7f7031d78880 1 imported monmap:
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: epoch 0
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: last_changed 2026-03-10T11:23:05.182054+0000
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: created 2026-03-10T11:23:05.182054+0000
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: min_mon_release 15 (octopus)
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: election_strategy: 1
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.a
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.291 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.284+0000 7f7031d78880 0 /usr/bin/ceph-mon: set fsid to 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: RocksDB version: 6.15.5
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Compile date Apr 18 2022
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: DB SUMMARY
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: DB Session ID: 8X6L522V1O1O2RAJULPL
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files:
2026-03-10T11:23:05.335 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db:
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.error_if_exists: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.create_if_missing: 1
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.env: 0x5622400c6860
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.fs: Posix File System
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.info_log: 0x562259609320
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.statistics: (nil)
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.use_fsync: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.db_log_dir:
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-a/store.db
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.write_buffer_manager: 0x5622598a9950
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.unordered_write: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.row_cache: None
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.wal_filter: None
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.preserve_deletes: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.two_write_queues: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.manual_wal_flush: 0
2026-03-10T11:23:05.336 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.atomic_flush: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.log_readahead_size: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.db_host_id: __hostname__
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_background_jobs: 2
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_background_compactions: -1
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_subcompactions: 1
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_total_wal_size: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_open_files: -1
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.bytes_per_sync: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Options.max_background_flushes: -1
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Compression algorithms supported:
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: kZSTD supported: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: kXpressCompression supported: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: kLZ4Compression supported: 1
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: kBZip2Compression supported: 0
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: kZlibCompression supported: 1
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.324+0000 7f7031d78880 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: [db/db_impl/db_impl_open.cc:281] Creating manifest 1
2026-03-10T11:23:05.337 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: [db/version_set.cc:4725] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: [db/column_family.cc:597] --------------- Options for column family [default]:
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.merge_operator:
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_filter: None
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5622595d2d10)
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: cache_index_and_filter_blocks: 1
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: pin_top_level_index_and_filter: 1
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: index_type: 0
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: data_block_index_type: 0
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: index_shortening: 1
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: data_block_hash_table_util_ratio: 0.750000
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: hash_index_allow_collision: 1
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: checksum: 1
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: no_block_cache: 0
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: block_cache: 0x56225963a170
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: block_cache_name: BinnedLRUCache
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: block_cache_options:
2026-03-10T11:23:05.341 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: capacity : 536870912
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: num_shard_bits : 4
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: strict_capacity_limit : 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: high_pri_pool_ratio: 0.000
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: block_cache_compressed: (nil)
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: persistent_cache: (nil)
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: block_size: 4096
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: block_size_deviation: 10
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: block_restart_interval: 16
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: index_block_restart_interval: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: metadata_block_size: 4096
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: partition_filters: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: use_delta_encoding: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: filter_policy: rocksdb.BuiltinBloomFilter
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: whole_key_filtering: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: verify_compression: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: read_amp_bytes_per_bit: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: format_version: 4
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: enable_index_compression: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: block_align: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compression: NoCompression
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.num_levels: 7
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.arena_block_size: 4194304
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T11:23:05.342 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.table_properties_collectors:
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.bloom_locality: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.ttl: 2592000
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.enable_blob_files: false
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.min_blob_size: 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: [db/version_set.cc:4773] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: [db/version_set.cc:4782] Column family [default] (ID 0), log number is 0
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.328+0000 7f7031d78880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 3
2026-03-10T11:23:05.343 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.336+0000 7f7031d78880 4 rocksdb: [db/db_impl/db_impl_open.cc:1701] SstFileManager instance 0x562259620700
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.336+0000 7f7031d78880 4 rocksdb: DB pointer 0x562259694000
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.336+0000 7f7023962700 4 rocksdb: [db/db_impl/db_impl.cc:902] ------- DUMPING STATS -------
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.336+0000 7f7023962700 4 rocksdb: [db/db_impl/db_impl.cc:903]
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: ** DB Stats **
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] **
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: ** Compaction Stats [default] **
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Flush(GB): cumulative 0.000, interval 0.000
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: AddFile(GB): cumulative 0.000, interval 0.000
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: AddFile(Total Files): cumulative 0, interval 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: AddFile(L0 Files): cumulative 0, interval 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: AddFile(Keys): cumulative 0, interval 0
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: ** File Read Latency Histogram By Level [default] **
2026-03-10T11:23:05.346 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon:
2026-03-10T11:23:05.347 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.340+0000 7f7031d78880 4 rocksdb: [db/db_impl/db_impl.cc:447] Shutdown: canceling all background work
2026-03-10T11:23:05.347 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.340+0000 7f7031d78880 4 rocksdb: [db/db_impl/db_impl.cc:625] Shutdown complete
2026-03-10T11:23:05.347 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph-mon: debug 2026-03-10T11:23:05.340+0000 7f7031d78880 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a
2026-03-10T11:23:05.377 INFO:teuthology.orchestra.run.vm05.stderr:create mon.a on
2026-03-10T11:23:05.536 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target.
2026-03-10T11:23:05.708 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d.target → /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d.target.
2026-03-10T11:23:05.708 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Created symlink /etc/systemd/system/ceph.target.wants/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d.target → /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d.target.
2026-03-10T11:23:06.033 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Failed to reset failed state of unit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.a.service: Unit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.a.service not loaded.
2026-03-10T11:23:06.035 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Created symlink /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d.target.wants/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.a.service → /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.
2026-03-10T11:23:06.216 INFO:teuthology.orchestra.run.vm05.stderr:firewalld does not appear to be present
2026-03-10T11:23:06.216 INFO:teuthology.orchestra.run.vm05.stderr:Not possible to enable service . firewalld.service is not available
2026-03-10T11:23:06.216 INFO:teuthology.orchestra.run.vm05.stderr:Waiting for mon to start...
2026-03-10T11:23:06.216 INFO:teuthology.orchestra.run.vm05.stderr:Waiting for mon...
2026-03-10T11:23:06.231 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:06 vm05 systemd[1]: Started Ceph mon.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: cluster:
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: id: 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: health: HEALTH_OK
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph:
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: services:
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mon: 1 daemons, quorum a (age 0.0716356s)
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mgr: no daemons active
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: osd: 0 osds: 0 up, 0 in
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph:
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: data:
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: pools: 0 pools, 0 pgs
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: objects: 0 objects, 0 B
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: usage: 0 B used, 0 B / 0 B avail
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: pgs:
2026-03-10T11:23:06.431 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph:
2026-03-10T11:23:06.480 INFO:teuthology.orchestra.run.vm05.stderr:mon is available
2026-03-10T11:23:06.480 INFO:teuthology.orchestra.run.vm05.stderr:Assimilating anything we can from ceph.conf...
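[note] The "Created symlink" lines above are ordinary `systemctl enable` output: cephadm wires the mon in as an instance of a per-cluster templated unit named after the fsid, and the "Failed to reset failed state" message is harmless here since that unit had never been loaded. A roughly equivalent manual sequence (a sketch using this run's fsid; cephadm issues the reset-failed/enable/start steps individually):

    fsid=72041074-1c73-11f1-8607-4fca9a5e0a4d
    systemctl enable ceph.target "ceph-$fsid.target"
    systemctl enable --now "ceph-$fsid@mon.a.service"   # instance of the ceph-<fsid>@.service template
    ceph -s   # as printed above: HEALTH_OK, quorum a, no mgr or osd yet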
2026-03-10T11:23:06.527 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:06 vm05 bash[17004]: cluster 2026-03-10T11:23:06.357017+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:06.528 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:06 vm05 bash[17004]: cluster 2026-03-10T11:23:06.351970+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph:
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: [global]
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: fsid = 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mon_host = [v2:192.168.123.105:3300,v1:192.168.123.105:6789]
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mon_osd_allow_pg_remap = true
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mon_osd_allow_primary_affinity = true
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mon_warn_on_no_sortbitwise = false
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: osd_crush_chooseleaf_type = 0
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph:
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: [mgr]
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mgr/cephadm/use_agent = False
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mgr/telemetry/nag = false
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph:
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: [osd]
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: osd_map_max_advance = 10
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: osd_mclock_iops_capacity_threshold_hdd = 49000
2026-03-10T11:23:06.655 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: osd_sloppy_crc = true
2026-03-10T11:23:06.696 INFO:teuthology.orchestra.run.vm05.stderr:Generating new minimal ceph.conf...
2026-03-10T11:23:06.925 INFO:teuthology.orchestra.run.vm05.stderr:Restarting the monitor...
2026-03-10T11:23:07.054 INFO:teuthology.orchestra.run.vm05.stderr:Setting mon public_network to 192.168.123.0/24
2026-03-10T11:23:07.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:06 vm05 systemd[1]: Stopping Ceph mon.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:23:07.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:06 vm05 bash[17372]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mon.a
2026-03-10T11:23:07.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:06 vm05 bash[17004]: debug 2026-03-10T11:23:06.944+0000 7febb4ee2700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T11:23:07.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:06 vm05 bash[17004]: debug 2026-03-10T11:23:06.944+0000 7febb4ee2700 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-10T11:23:07.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17379]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mon-a
2026-03-10T11:23:07.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17412]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mon.a
2026-03-10T11:23:07.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.a.service: Deactivated successfully.
2026-03-10T11:23:07.153 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 systemd[1]: Stopped Ceph mon.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:23:07.154 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 systemd[1]: Started Ceph mon.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:23:07.333 INFO:teuthology.orchestra.run.vm05.stderr:Wrote config to /etc/ceph/ceph.conf
2026-03-10T11:23:07.333 INFO:teuthology.orchestra.run.vm05.stderr:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
2026-03-10T11:23:07.333 INFO:teuthology.orchestra.run.vm05.stderr:Creating mgr...
2026-03-10T11:23:07.333 INFO:teuthology.orchestra.run.vm05.stderr:Verifying port 9283 ...
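[editor's note] Before placing mgr.y, the bootstrap logs "Verifying port 9283 ..." (9283 is the mgr's Prometheus endpoint, consistent with the "open ports <[9283]>" firewalld message further down). One plausible way to perform such a check is a bind probe; this stand-in is an assumption about the mechanism, not cephadm's exact code:

```python
# A plausible stand-in for the "Verifying port 9283 ..." check: try to
# bind the port and treat success as "free". Assumed mechanism only.
import socket

def port_is_free(port, addr=''):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((addr, port))  # succeeds only if no listener holds it
            return True
        except OSError:
            return False

# e.g. port_is_free(9283) before scheduling the mgr daemon
```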
2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 0 ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable), process ceph-mon, pid 8 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 0 pidfile_write: ignore empty --pid-file 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 0 load: jerasure load: lrc 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: RocksDB version: 6.15.5 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Git sha rocksdb_build_git_sha:@0@ 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Compile date Apr 18 2022 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: DB SUMMARY 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: DB Session ID: HCJJSJ3IYKSYBYW13470 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: CURRENT file: CURRENT 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T11:23:07.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 131 Bytes 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000010.log size: 73715 ; 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.error_if_exists: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.create_if_missing: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T11:23:07.448 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.env: 0x55c6a8ec6860 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.fs: Posix File System 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.info_log: 0x55c6cda97e00 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.statistics: (nil) 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.use_fsync: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.db_log_dir: 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.wal_dir: 
/var/lib/ceph/mon/ceph-a/store.db 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.write_buffer_manager: 0x55c6cdb86240 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T11:23:07.448 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.unordered_write: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.row_cache: None 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.wal_filter: None 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.preserve_deletes: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.two_write_queues: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.atomic_flush: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T11:23:07.448 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: 
Options.best_efforts_recovery: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.max_open_files: -1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.176+0000 7fa7de4a9880 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T11:23:07.449 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Compression algorithms supported: 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: kZSTD supported: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: kXpressCompression supported: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: kZlibCompression supported: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: [db/version_set.cc:4725] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000009 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: [db/column_family.cc:597] --------------- Options for column family [default]: 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.merge_operator: 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_filter: None 2026-03-10T11:23:07.449 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c6cda65d00) 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: cache_index_and_filter_blocks: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: pin_top_level_index_and_filter: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: index_type: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: data_block_index_type: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: index_shortening: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: hash_index_allow_collision: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: checksum: 1 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: no_block_cache: 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: block_cache: 0x55c6cdacc170 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: block_cache_name: BinnedLRUCache 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: block_cache_options: 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: capacity : 536870912 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: num_shard_bits : 4 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: strict_capacity_limit : 0 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: high_pri_pool_ratio: 0.000 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: block_cache_compressed: (nil) 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: persistent_cache: (nil) 2026-03-10T11:23:07.449 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: block_size: 4096 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: block_size_deviation: 10 2026-03-10T11:23:07.449 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: block_restart_interval: 16 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: index_block_restart_interval: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: metadata_block_size: 4096 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: partition_filters: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: use_delta_encoding: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: filter_policy: rocksdb.BuiltinBloomFilter 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: whole_key_filtering: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: verify_compression: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: read_amp_bytes_per_bit: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: format_version: 4 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: enable_index_compression: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: block_align: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compression: NoCompression 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.num_levels: 7 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 
2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.level0_stop_writes_trigger: 36 
2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.arena_block_size: 4194304 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 
bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T11:23:07.450 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.table_properties_collectors: 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: 
debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.bloom_locality: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.ttl: 2592000 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.enable_blob_files: false 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.min_blob_size: 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.180+0000 7fa7de4a9880 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.192+0000 7fa7de4a9880 4 rocksdb: [db/version_set.cc:4773] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 11, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.192+0000 7fa7de4a9880 4 rocksdb: [db/version_set.cc:4782] Column family [default] (ID 0), log number is 5 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 
2026-03-10T11:23:07.192+0000 7fa7de4a9880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 13 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.192+0000 7fa7de4a9880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773141787198773, "job": 1, "event": "recovery_started", "wal_files": [10]} 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.192+0000 7fa7de4a9880 4 rocksdb: [db/db_impl/db_impl_open.cc:847] Recovering log #10 mode 2 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.192+0000 7fa7de4a9880 3 rocksdb: [table/block_based/filter_policy.cc:996] Using legacy Bloom filter with high (20) bits/key. Dramatic filter space and/or accuracy improvement is available with format_version>=5. 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.196+0000 7fa7de4a9880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773141787200233, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 14, "file_size": 70687, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 69004, "index_size": 176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 581, "raw_key_size": 9687, "raw_average_key_size": 49, "raw_value_size": 63573, "raw_average_value_size": 324, "num_data_blocks": 8, "num_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1773141787, "oldest_key_time": 0, "file_creation_time": 0, "db_id": "3aaeb5bb-1366-4a1f-a6d8-6137f3cd1b80", "db_session_id": "HCJJSJ3IYKSYBYW13470"}} 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.196+0000 7fa7de4a9880 4 rocksdb: [db/version_set.cc:4083] Creating manifest 15 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.196+0000 7fa7de4a9880 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773141787201785, "job": 1, "event": "recovery_finished"} 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.196+0000 7fa7de4a9880 4 rocksdb: [file/delete_scheduler.cc:73] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000010.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.200+0000 7fa7de4a9880 4 rocksdb: [db/db_impl/db_impl_open.cc:1701] SstFileManager instance 0x55c6cdab2700 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: debug 2026-03-10T11:23:07.200+0000 7fa7de4a9880 4 rocksdb: DB pointer 0x55c6cdb26000 2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 
bash[17453]: cluster 2026-03-10T11:23:07.210604+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: cluster 2026-03-10T11:23:07.210643+0000 mon.a (mon.0) 2 : cluster [DBG] monmap e1: 1 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0]}
2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: cluster 2026-03-10T11:23:07.210685+0000 mon.a (mon.0) 3 : cluster [DBG] fsmap
2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: cluster 2026-03-10T11:23:07.210698+0000 mon.a (mon.0) 4 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
2026-03-10T11:23:07.451 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 bash[17453]: cluster 2026-03-10T11:23:07.211184+0000 mon.a (mon.0) 5 : cluster [DBG] mgrmap e1: no daemons active
2026-03-10T11:23:07.498 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Failed to reset failed state of unit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.y.service: Unit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.y.service not loaded.
2026-03-10T11:23:07.501 INFO:teuthology.orchestra.run.vm05.stderr:systemctl: Created symlink /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d.target.wants/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.y.service → /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.
2026-03-10T11:23:07.697 INFO:teuthology.orchestra.run.vm05.stderr:firewalld does not appear to be present
2026-03-10T11:23:07.697 INFO:teuthology.orchestra.run.vm05.stderr:Not possible to enable service . firewalld.service is not available
2026-03-10T11:23:07.697 INFO:teuthology.orchestra.run.vm05.stderr:firewalld does not appear to be present
2026-03-10T11:23:07.697 INFO:teuthology.orchestra.run.vm05.stderr:Not possible to open ports <[9283]>. firewalld.service is not available
2026-03-10T11:23:07.697 INFO:teuthology.orchestra.run.vm05.stderr:Waiting for mgr to start...
2026-03-10T11:23:07.697 INFO:teuthology.orchestra.run.vm05.stderr:Waiting for mgr...
2026-03-10T11:23:07.714 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:23:07.714 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:07 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
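[editor's note] The RocksDB startup dump above interleaves machine-readable `EVENT_LOG_v1 {...}` JSON records (recovery_started, table_file_creation, recovery_finished) with the plain-text option listing. A small sketch for pulling those events out of captured journal lines like these; the marker string is taken verbatim from the log, everything else is plain stdlib:

```python
# RocksDB emits EVENT_LOG_v1 records as JSON appended to a fixed marker;
# this pulls them out of journal lines like the ones above. Sketch only.
import json

MARKER = 'EVENT_LOG_v1 '
_decoder = json.JSONDecoder()

def rocksdb_events(lines):
    """Yield (event_name, payload) for each EVENT_LOG_v1 record found."""
    for line in lines:
        i = line.find(MARKER)
        if i == -1:
            continue
        # raw_decode tolerates trailing text after the JSON object
        payload, _ = _decoder.raw_decode(line[i + len(MARKER):])
        yield payload.get('event'), payload

# e.g. rocksdb_events(open('teuthology.log')) yields
# ('recovery_started', {...}), ('table_file_creation', {...}), ...
```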
2026-03-10T11:23:07.933 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph:
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: {
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "fsid": "72041074-1c73-11f1-8607-4fca9a5e0a4d",
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "health": {
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "status": "HEALTH_OK",
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "checks": {},
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mutes": []
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "election_epoch": 5,
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum": [
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 0
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ],
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum_names": [
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "a"
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ],
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum_age": 0,
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "monmap": {
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy",
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_mons": 1
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:07.934 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osdmap": {
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_osds": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_up_osds": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osd_up_since": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_in_osds": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osd_in_since": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_remapped_pgs": 0
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "pgmap": {
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "pgs_by_state": [],
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_pgs": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_pools": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_objects": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "data_bytes": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_used": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_avail": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_total": 0
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "fsmap": {
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "by_rank": [],
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "up:standby": 0
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mgrmap": {
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "available": false,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_standbys": 0,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "modules": [
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "iostat",
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "nfs",
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "restful"
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ],
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "services": {}
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "servicemap": {
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "modified": "2026-03-10T11:23:06.357250+0000",
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "services": {}
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "progress_events": {}
2026-03-10T11:23:07.935 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }
2026-03-10T11:23:07.965 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:07 vm05 bash[17722]: debug 2026-03-10T11:23:07.912+0000 7f8510ade000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T11:23:07.987 INFO:teuthology.orchestra.run.vm05.stderr:mgr not available, waiting (1/15)...
2026-03-10T11:23:08.277 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:07 vm05 bash[17722]: debug 2026-03-10T11:23:07.980+0000 7f8510ade000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T11:23:08.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:08 vm05 bash[17453]: audit 2026-03-10T11:23:07.274809+0000 mon.a (mon.0) 6 : audit [INF] from='client.? 192.168.123.105:0/3056741632' entity='client.admin'
2026-03-10T11:23:08.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:08 vm05 bash[17453]: audit 2026-03-10T11:23:07.930092+0000 mon.a (mon.0) 7 : audit [DBG] from='client.? 192.168.123.105:0/2275069396' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
2026-03-10T11:23:08.598 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:08 vm05 bash[17722]: debug 2026-03-10T11:23:08.280+0000 7f8510ade000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T11:23:09.066 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:08 vm05 bash[17722]: debug 2026-03-10T11:23:08.764+0000 7f8510ade000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T11:23:09.066 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:08 vm05 bash[17722]: debug 2026-03-10T11:23:08.860+0000 7f8510ade000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T11:23:09.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:09 vm05 bash[17722]: debug 2026-03-10T11:23:09.060+0000 7f8510ade000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T11:23:09.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:09 vm05 bash[17722]: debug 2026-03-10T11:23:09.164+0000 7f8510ade000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T11:23:09.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:09 vm05 bash[17722]: debug 2026-03-10T11:23:09.216+0000 7f8510ade000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T11:23:09.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:09 vm05 bash[17722]: debug 2026-03-10T11:23:09.348+0000 7f8510ade000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T11:23:09.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:09 vm05 bash[17722]: debug 2026-03-10T11:23:09.404+0000 7f8510ade000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T11:23:09.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:09 vm05 bash[17722]: debug 2026-03-10T11:23:09.476+0000 7f8510ade000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph:
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: {
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "fsid": "72041074-1c73-11f1-8607-4fca9a5e0a4d",
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "health": {
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "status": "HEALTH_OK",
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "checks": {},
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mutes": []
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "election_epoch": 5,
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum": [
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 0
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ],
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum_names": [
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "a"
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ],
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum_age": 3,
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "monmap": {
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy",
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_mons": 1
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osdmap": {
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_osds": 0,
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_up_osds": 0,
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osd_up_since": 0,
2026-03-10T11:23:10.231 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_in_osds": 0,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osd_in_since": 0,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_remapped_pgs": 0
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "pgmap": {
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "pgs_by_state": [],
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_pgs": 0,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_pools": 0,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_objects": 0,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "data_bytes": 0,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_used": 0,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_avail": 0,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_total": 0
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "fsmap": {
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "by_rank": [],
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "up:standby": 0
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: },
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mgrmap": {
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "available": false,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_standbys": 0,
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "modules": [
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "iostat",
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "nfs",
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "restful"
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ],
2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "services": {}
2026-03-10T11:23:10.232
INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "servicemap": { 2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "modified": "2026-03-10T11:23:06.357250+0000", 2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "services": {} 2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-10T11:23:10.232 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: } 2026-03-10T11:23:10.262 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:09 vm05 bash[17722]: debug 2026-03-10T11:23:09.988+0000 7f8510ade000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:23:10.262 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:10 vm05 bash[17722]: debug 2026-03-10T11:23:10.044+0000 7f8510ade000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:23:10.262 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:10 vm05 bash[17722]: debug 2026-03-10T11:23:10.120+0000 7f8510ade000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:23:10.286 INFO:teuthology.orchestra.run.vm05.stderr:mgr not available, waiting (2/15)... 2026-03-10T11:23:10.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:10 vm05 bash[17453]: audit 2026-03-10T11:23:10.226834+0000 mon.a (mon.0) 8 : audit [DBG] from='client.? 192.168.123.105:0/2025739373' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T11:23:10.531 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:10 vm05 bash[17722]: debug 2026-03-10T11:23:10.460+0000 7f8510ade000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:23:10.531 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:10 vm05 bash[17722]: debug 2026-03-10T11:23:10.524+0000 7f8510ade000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:23:10.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:10 vm05 bash[17722]: debug 2026-03-10T11:23:10.584+0000 7f8510ade000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:23:10.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:10 vm05 bash[17722]: debug 2026-03-10T11:23:10.684+0000 7f8510ade000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:23:11.271 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:10 vm05 bash[17722]: debug 2026-03-10T11:23:10.992+0000 7f8510ade000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:23:11.271 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:11 vm05 bash[17722]: debug 2026-03-10T11:23:11.196+0000 7f8510ade000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:23:11.271 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:11 vm05 bash[17722]: debug 2026-03-10T11:23:11.264+0000 7f8510ade000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:23:11.597 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:11 vm05 bash[17722]: debug 2026-03-10T11:23:11.332+0000 7f8510ade000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:23:11.597 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:11 vm05 bash[17722]: debug 2026-03-10T11:23:11.504+0000 7f8510ade000 -1 mgr[py] Module 
test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:23:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: cluster 2026-03-10T11:23:12.054728+0000 mon.a (mon.0) 9 : cluster [INF] Activating manager daemon y 2026-03-10T11:23:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: cluster 2026-03-10T11:23:12.057690+0000 mon.a (mon.0) 10 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00303857s) 2026-03-10T11:23:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.060446+0000 mon.a (mon.0) 11 : audit [DBG] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:23:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.060591+0000 mon.a (mon.0) 12 : audit [DBG] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:23:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.060666+0000 mon.a (mon.0) 13 : audit [DBG] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:23:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.060860+0000 mon.a (mon.0) 14 : audit [DBG] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:23:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.061776+0000 mon.a (mon.0) 15 : audit [DBG] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:23:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: cluster 2026-03-10T11:23:12.066035+0000 mon.a (mon.0) 16 : cluster [INF] Manager daemon y is now available 2026-03-10T11:23:12.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.075219+0000 mon.a (mon.0) 17 : audit [INF] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:23:12.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.076133+0000 mon.a (mon.0) 18 : audit [INF] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:23:12.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.090662+0000 mon.a (mon.0) 19 : audit [INF] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' 2026-03-10T11:23:12.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.093661+0000 mon.a (mon.0) 20 : audit [INF] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' 2026-03-10T11:23:12.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:12 vm05 bash[17453]: audit 2026-03-10T11:23:12.103916+0000 mon.a (mon.0) 21 : audit [INF] from='mgr.14100 192.168.123.105:0/1284505453' entity='mgr.y' 2026-03-10T11:23:12.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:12 vm05 bash[17722]: debug 2026-03-10T11:23:12.048+0000 7f8510ade000 -1 mgr[py] Module 
snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: { 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "fsid": "72041074-1c73-11f1-8607-4fca9a5e0a4d", 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "health": { 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "checks": {}, 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mutes": [] 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum": [ 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 0 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ], 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "a" 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ], 2026-03-10T11:23:12.697 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum_age": 5, 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "monmap": { 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osdmap": { 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-10T11:23:12.698 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "pgmap": { 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_pools": 0, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 
"bytes_used": 0, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "fsmap": { 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "available": false, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "modules": [ 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "iostat", 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "nfs", 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "restful" 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ], 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "services": {} 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "servicemap": { 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "modified": "2026-03-10T11:23:06.357250+0000", 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "services": {} 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-10T11:23:12.699 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: } 2026-03-10T11:23:12.735 INFO:teuthology.orchestra.run.vm05.stderr:mgr not available, waiting (3/15)... 2026-03-10T11:23:13.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:13 vm05 bash[17453]: audit 2026-03-10T11:23:12.694304+0000 mon.a (mon.0) 22 : audit [DBG] from='client.? 
192.168.123.105:0/2991825350' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T11:23:13.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:13 vm05 bash[17453]: cluster 2026-03-10T11:23:13.062103+0000 mon.a (mon.0) 23 : cluster [DBG] mgrmap e3: y(active, since 1.00745s) 2026-03-10T11:23:15.022 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 2026-03-10T11:23:15.022 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: { 2026-03-10T11:23:15.022 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "fsid": "72041074-1c73-11f1-8607-4fca9a5e0a4d", 2026-03-10T11:23:15.022 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "health": { 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "status": "HEALTH_OK", 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "checks": {}, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mutes": [] 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "election_epoch": 5, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum": [ 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 0 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ], 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum_names": [ 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "a" 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ], 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "quorum_age": 7, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "monmap": { 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "min_mon_release_name": "quincy", 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_mons": 1 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osdmap": { 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_osds": 0, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_up_osds": 0, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osd_up_since": 0, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_in_osds": 0, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "osd_in_since": 0, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_remapped_pgs": 0 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "pgmap": { 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "pgs_by_state": [], 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_pgs": 0, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_pools": 0, 
2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_objects": 0, 2026-03-10T11:23:15.023 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "data_bytes": 0, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_used": 0, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_avail": 0, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "bytes_total": 0 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "fsmap": { 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "by_rank": [], 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "up:standby": 0 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mgrmap": { 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "available": true, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_standbys": 0, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "modules": [ 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "iostat", 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "nfs", 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "restful" 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ], 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "services": {} 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "servicemap": { 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 1, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "modified": "2026-03-10T11:23:06.357250+0000", 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "services": {} 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: }, 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "progress_events": {} 2026-03-10T11:23:15.024 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: } 2026-03-10T11:23:15.061 INFO:teuthology.orchestra.run.vm05.stderr:mgr is available 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: [global] 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: fsid = 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mon_osd_allow_pg_remap = true 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mon_osd_allow_primary_affinity = true 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mon_warn_on_no_sortbitwise = false 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: osd_crush_chooseleaf_type = 0 2026-03-10T11:23:15.295 
INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: [mgr] 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: mgr/telemetry/nag = false 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: [osd] 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: osd_map_max_advance = 10 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: osd_mclock_iops_capacity_threshold_hdd = 49000 2026-03-10T11:23:15.295 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: osd_sloppy_crc = true 2026-03-10T11:23:15.306 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:15 vm05 bash[17453]: cluster 2026-03-10T11:23:14.270317+0000 mon.a (mon.0) 24 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-10T11:23:15.306 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:15 vm05 bash[17453]: audit 2026-03-10T11:23:15.018816+0000 mon.a (mon.0) 25 : audit [DBG] from='client.? 192.168.123.105:0/689598303' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-10T11:23:15.365 INFO:teuthology.orchestra.run.vm05.stderr:Enabling cephadm module... 2026-03-10T11:23:16.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:16 vm05 bash[17453]: audit 2026-03-10T11:23:15.286178+0000 mon.a (mon.0) 26 : audit [INF] from='client.? 192.168.123.105:0/3461664862' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-10T11:23:16.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:16 vm05 bash[17453]: audit 2026-03-10T11:23:15.289064+0000 mon.a (mon.0) 27 : audit [INF] from='client.? 192.168.123.105:0/3461664862' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-10T11:23:16.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:16 vm05 bash[17453]: audit 2026-03-10T11:23:15.633711+0000 mon.a (mon.0) 28 : audit [INF] from='client.? 
192.168.123.105:0/3162978208' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-10T11:23:16.597 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:16 vm05 bash[17722]: ignoring --setuser ceph since I am not root 2026-03-10T11:23:16.598 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:16 vm05 bash[17722]: ignoring --setgroup ceph since I am not root 2026-03-10T11:23:16.598 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:16 vm05 bash[17722]: debug 2026-03-10T11:23:16.416+0000 7f1514b02000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:23:16.598 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:16 vm05 bash[17722]: debug 2026-03-10T11:23:16.464+0000 7f1514b02000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:23:16.618 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: { 2026-03-10T11:23:16.618 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 5, 2026-03-10T11:23:16.618 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "available": true, 2026-03-10T11:23:16.618 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "active_name": "y", 2026-03-10T11:23:16.619 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_standby": 0 2026-03-10T11:23:16.619 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: } 2026-03-10T11:23:16.671 INFO:teuthology.orchestra.run.vm05.stderr:Waiting for the mgr to restart... 2026-03-10T11:23:16.671 INFO:teuthology.orchestra.run.vm05.stderr:Waiting for mgr epoch 5... 2026-03-10T11:23:17.097 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:16 vm05 bash[17722]: debug 2026-03-10T11:23:16.800+0000 7f1514b02000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:23:17.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:17 vm05 bash[17453]: audit 2026-03-10T11:23:16.292529+0000 mon.a (mon.0) 29 : audit [INF] from='client.? 192.168.123.105:0/3162978208' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-10T11:23:17.588 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:17 vm05 bash[17453]: cluster 2026-03-10T11:23:16.292682+0000 mon.a (mon.0) 30 : cluster [DBG] mgrmap e5: y(active, since 4s) 2026-03-10T11:23:17.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:17 vm05 bash[17453]: audit 2026-03-10T11:23:16.613480+0000 mon.a (mon.0) 31 : audit [DBG] from='client.? 
192.168.123.105:0/712902762' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T11:23:17.589 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:17 vm05 bash[17722]: debug 2026-03-10T11:23:17.296+0000 7f1514b02000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:23:17.589 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:17 vm05 bash[17722]: debug 2026-03-10T11:23:17.384+0000 7f1514b02000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:23:17.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:17 vm05 bash[17722]: debug 2026-03-10T11:23:17.580+0000 7f1514b02000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:23:17.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:17 vm05 bash[17722]: debug 2026-03-10T11:23:17.684+0000 7f1514b02000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:23:17.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:17 vm05 bash[17722]: debug 2026-03-10T11:23:17.732+0000 7f1514b02000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:23:18.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:17 vm05 bash[17722]: debug 2026-03-10T11:23:17.872+0000 7f1514b02000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:23:18.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:17 vm05 bash[17722]: debug 2026-03-10T11:23:17.936+0000 7f1514b02000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:23:18.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:18 vm05 bash[17722]: debug 2026-03-10T11:23:18.008+0000 7f1514b02000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:23:18.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:18 vm05 bash[17722]: debug 2026-03-10T11:23:18.560+0000 7f1514b02000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:23:18.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:18 vm05 bash[17722]: debug 2026-03-10T11:23:18.620+0000 7f1514b02000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:23:18.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:18 vm05 bash[17722]: debug 2026-03-10T11:23:18.680+0000 7f1514b02000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:23:19.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:19 vm05 bash[17722]: debug 2026-03-10T11:23:19.008+0000 7f1514b02000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:23:19.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:19 vm05 bash[17722]: debug 2026-03-10T11:23:19.068+0000 7f1514b02000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:23:19.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:19 vm05 bash[17722]: debug 2026-03-10T11:23:19.132+0000 7f1514b02000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:23:19.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:19 vm05 bash[17722]: debug 2026-03-10T11:23:19.220+0000 7f1514b02000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:23:19.837 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:19 vm05 bash[17722]: debug 2026-03-10T11:23:19.536+0000 7f1514b02000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:23:19.837 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:19 vm05 bash[17722]: debug 2026-03-10T11:23:19.716+0000 7f1514b02000 -1 mgr[py] Module prometheus has 
missing NOTIFY_TYPES member 2026-03-10T11:23:19.837 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:19 vm05 bash[17722]: debug 2026-03-10T11:23:19.772+0000 7f1514b02000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:23:20.097 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:19 vm05 bash[17722]: debug 2026-03-10T11:23:19.828+0000 7f1514b02000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:23:20.097 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:19 vm05 bash[17722]: debug 2026-03-10T11:23:19.968+0000 7f1514b02000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:23:20.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:20 vm05 bash[17453]: cluster 2026-03-10T11:23:20.465689+0000 mon.a (mon.0) 32 : cluster [INF] Active manager daemon y restarted 2026-03-10T11:23:20.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:20 vm05 bash[17453]: cluster 2026-03-10T11:23:20.466417+0000 mon.a (mon.0) 33 : cluster [INF] Activating manager daemon y 2026-03-10T11:23:20.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:20 vm05 bash[17453]: cluster 2026-03-10T11:23:20.468585+0000 mon.a (mon.0) 34 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-10T11:23:20.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:20 vm05 bash[17722]: debug 2026-03-10T11:23:20.460+0000 7f1514b02000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:23:21.531 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:21 vm05 bash[17722]: [10/Mar/2026:11:23:21] ENGINE Bus STARTING 2026-03-10T11:23:21.531 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:21 vm05 bash[17722]: [10/Mar/2026:11:23:21] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:23:21.531 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:21 vm05 bash[17722]: [10/Mar/2026:11:23:21] ENGINE Bus STARTED 2026-03-10T11:23:21.539 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: { 2026-03-10T11:23:21.539 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mgrmap_epoch": 7, 2026-03-10T11:23:21.539 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "initialized": true 2026-03-10T11:23:21.539 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: } 2026-03-10T11:23:21.581 INFO:teuthology.orchestra.run.vm05.stderr:mgr epoch 5 is available 2026-03-10T11:23:21.581 INFO:teuthology.orchestra.run.vm05.stderr:Setting orchestrator backend to cephadm... 
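[editor's note] The stderr lines above trace cephadm's bootstrap hand-off: poll `ceph status` until the mgrmap reports an available mgr, assimilate the minimal conf (the [global]/[mgr]/[osd] dump at 11:23:15 is the file being fed in), enable the cephadm mgr module, and ride out the resulting mgr respawn by watching the mgr epoch. A minimal bash sketch of that flow, assuming `jq` and the 15-attempt budget shown in this run — a reconstruction, not the actual bootstrap code:

    # Poll cluster status until the mgrmap reports an available active mgr
    # (this run needed 4 of its 15 attempts).
    for i in $(seq 1 15); do
        ceph status --format json-pretty | jq -e '.mgrmap.available' >/dev/null && break
        echo "mgr not available, waiting ($i/15)..." >&2
        sleep 2
    done
    ceph config assimilate-conf -i /path/to/minimal.conf   # path is illustrative
    # Enabling a module respawns the active mgr: record the epoch first,
    # then wait for a newer mgrmap epoch before using the orchestrator.
    epoch=$(ceph mgr stat | jq .epoch)
    ceph mgr module enable cephadm
    while [ "$(ceph mgr stat | jq .epoch)" -le "$epoch" ]; do
        sleep 2
    done
    ceph orch set backend cephadm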
2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: cluster 2026-03-10T11:23:20.520209+0000 mon.a (mon.0) 35 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0538752s) 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.524719+0000 mon.a (mon.0) 36 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.525138+0000 mon.a (mon.0) 37 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.525888+0000 mon.a (mon.0) 38 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.526086+0000 mon.a (mon.0) 39 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.526278+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: cluster 2026-03-10T11:23:20.531326+0000 mon.a (mon.0) 41 : cluster [INF] Manager daemon y is now available 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.541011+0000 mon.a (mon.0) 42 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.543889+0000 mon.a (mon.0) 43 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.551951+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.552886+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.554171+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:20.557702+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 
2026-03-10T11:23:20.564462+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: cephadm 2026-03-10T11:23:21.258752+0000 mgr.y (mgr.14120) 1 : cephadm [INF] [10/Mar/2026:11:23:21] ENGINE Bus STARTING 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:21.372820+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:21.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:21 vm05 bash[17453]: audit 2026-03-10T11:23:21.423136+0000 mon.a (mon.0) 50 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:22.114 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: value unchanged 2026-03-10T11:23:22.209 INFO:teuthology.orchestra.run.vm05.stderr:Generating ssh key... 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:22 vm05 bash[17453]: cephadm 2026-03-10T11:23:21.369076+0000 mgr.y (mgr.14120) 2 : cephadm [INF] [10/Mar/2026:11:23:21] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:22 vm05 bash[17453]: cephadm 2026-03-10T11:23:21.369158+0000 mgr.y (mgr.14120) 3 : cephadm [INF] [10/Mar/2026:11:23:21] ENGINE Bus STARTED 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:22 vm05 bash[17453]: cluster 2026-03-10T11:23:21.527397+0000 mon.a (mon.0) 51 : cluster [DBG] mgrmap e7: y(active, since 1.06106s) 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:22 vm05 bash[17453]: audit 2026-03-10T11:23:21.531997+0000 mgr.y (mgr.14120) 4 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:22 vm05 bash[17453]: audit 2026-03-10T11:23:21.536133+0000 mgr.y (mgr.14120) 5 : audit [DBG] from='client.14124 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:22 vm05 bash[17453]: audit 2026-03-10T11:23:21.828805+0000 mgr.y (mgr.14120) 6 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:22 vm05 bash[17453]: audit 2026-03-10T11:23:21.833847+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:22 vm05 bash[17453]: audit 2026-03-10T11:23:21.839342+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:22 vm05 bash[17453]: audit 2026-03-10T11:23:22.111375+0000 mgr.y (mgr.14120) 7 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: 
Generating public/private rsa key pair. 2026-03-10T11:23:22.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: Your identification has been saved in /tmp/tmpu5pa0e0k/key. 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: Your public key has been saved in /tmp/tmpu5pa0e0k/key.pub. 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: The key fingerprint is: 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: SHA256:ypL8GQRWQ2IysrBQlZM0ny139HtoWE2Is8pLZiVgMV8 ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: The key's randomart image is: 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: +---[RSA 3072]----+ 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: |+.+o*+*. E. .. | 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: |o+ ++=o* oo..o | 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: |o oo+.+ .oo . | 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: | . . o..oo o | 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: | ..S+. + . | 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: | . + .* . . | 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: | + ++ . | 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: | o o. | 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: | o | 2026-03-10T11:23:22.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:22 vm05 bash[17722]: +----[SHA256]-----+ 2026-03-10T11:23:22.975 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxOfDH30xfaCFeOq+5EWl1iLXU0I7IvWkvwwEO4LzRUoi6d8jLVyXX8c+mOrtiQ7WiAJNEvR60/ShIrNmWEtxSEvRexDJBD8jOS76U9gNgXK3pgiK397TBO1PqMJd0KY52lDAtUna5eJ+Dh/5LY3voKxdifkgIkBckg91kL21UBQDpfEk9886rzW3495/J3fwM5BIeAnDNh12yWDOlZyYAAotIH2FOjyPa8zb0f7Duvmrj8iNhScx5/i0mqTcUdIdwCPk5WwKsYdCmWXI//BJIa3mA5bf7UW4Oy/TQLJl6LUdfEpOh/OoHNPosEPvftjfMvwzOmLssvFDtmz2PQkfiH9MiF6+y+Bxy9ZQONcaWAp2qXsSvMIM0cr2/kPlUVzeA4y/3Fhbdooo2xme4WBit7q97nz+/hhDGtK5swz0uqdxtAT0UmeOBgfoJY5+Y4M+hghvJlhx1VRviA+Hyl4P9hDOYq/D/ssr0PttEQ+F9E4k0fy/JJUnPawvu0siZVGM= ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:23:23.086 INFO:teuthology.orchestra.run.vm05.stderr:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-10T11:23:23.086 INFO:teuthology.orchestra.run.vm05.stderr:Adding key to root@localhost authorized_keys... 2026-03-10T11:23:23.086 INFO:teuthology.orchestra.run.vm05.stderr:Adding host vm05... 2026-03-10T11:23:23.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:23 vm05 bash[17453]: audit 2026-03-10T11:23:22.428271+0000 mgr.y (mgr.14120) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:23.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:23 vm05 bash[17453]: cephadm 2026-03-10T11:23:22.428501+0000 mgr.y (mgr.14120) 9 : cephadm [INF] Generating ssh key... 
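[editor's note] Every command prefix in the audit records around the ssh setup appears verbatim, so the sequence reconstructs cleanly as plain `ceph` CLI; the key path and host below are the ones from this run:

    ceph cephadm set-user root
    ceph cephadm generate-key
    ceph cephadm get-pub-key > /home/ubuntu/cephtest/ceph.pub
    # bootstrap appends the key for root@localhost, then registers the node
    cat /home/ubuntu/cephtest/ceph.pub >> /root/.ssh/authorized_keys
    ceph orch host add vm05 192.168.123.105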
2026-03-10T11:23:23.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:23 vm05 bash[17453]: audit 2026-03-10T11:23:22.591741+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:23.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:23 vm05 bash[17453]: audit 2026-03-10T11:23:22.594356+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:23.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:23 vm05 bash[17453]: audit 2026-03-10T11:23:22.972822+0000 mgr.y (mgr.14120) 10 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:23.827 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:23 vm05 bash[17453]: cluster 2026-03-10T11:23:23.235301+0000 mon.a (mon.0) 56 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-10T11:23:24.078 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: Added host 'vm05' with addr '192.168.123.105' 2026-03-10T11:23:24.128 INFO:teuthology.orchestra.run.vm05.stderr:Deploying unmanaged mon service... 2026-03-10T11:23:24.399 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: Scheduled mon update... 2026-03-10T11:23:24.444 INFO:teuthology.orchestra.run.vm05.stderr:Deploying unmanaged mgr service... 2026-03-10T11:23:24.761 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: Scheduled mgr update... 2026-03-10T11:23:25.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:25 vm05 bash[17453]: audit 2026-03-10T11:23:23.619097+0000 mgr.y (mgr.14120) 11 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm05", "addr": "192.168.123.105", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:25.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:25 vm05 bash[17453]: cephadm 2026-03-10T11:23:23.818243+0000 mgr.y (mgr.14120) 12 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-10T11:23:25.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:25 vm05 bash[17453]: audit 2026-03-10T11:23:24.073356+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:25.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:25 vm05 bash[17453]: cephadm 2026-03-10T11:23:24.073941+0000 mgr.y (mgr.14120) 13 : cephadm [INF] Added host vm05 2026-03-10T11:23:25.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:25 vm05 bash[17453]: audit 2026-03-10T11:23:24.111121+0000 mon.a (mon.0) 58 : audit [DBG] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:25.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:25 vm05 bash[17453]: audit 2026-03-10T11:23:24.395488+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:25.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:25 vm05 bash[17453]: audit 2026-03-10T11:23:24.757801+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:25.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:25 vm05 bash[17453]: audit 2026-03-10T11:23:25.048718+0000 mon.a (mon.0) 61 : audit [INF] from='client.? 192.168.123.105:0/3706425948' entity='client.admin' 2026-03-10T11:23:25.463 INFO:teuthology.orchestra.run.vm05.stderr:Enabling the dashboard module... 
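[editor's note] The "Deploying unmanaged mon/mgr service..." steps correspond to the `orch apply` calls with `"unmanaged": true` dispatched at 11:23:26: the service specs are saved, but cephadm will not schedule those daemons on its own, leaving the test harness in control of placement. The equivalent CLI, as recorded:

    ceph orch apply mon --unmanaged    # saved as 'service mon spec with placement count:5'
    ceph orch apply mgr --unmanaged    # saved as 'service mgr spec with placement count:2'
    ceph mgr module enable dashboard   # triggers another mgr respawn, hence the epoch wait below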
2026-03-10T11:23:26.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:26 vm05 bash[17453]: audit 2026-03-10T11:23:24.391691+0000 mgr.y (mgr.14120) 14 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:26.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:26 vm05 bash[17453]: cephadm 2026-03-10T11:23:24.392677+0000 mgr.y (mgr.14120) 15 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-10T11:23:26.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:26 vm05 bash[17453]: audit 2026-03-10T11:23:24.754225+0000 mgr.y (mgr.14120) 16 : audit [DBG] from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:26.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:26 vm05 bash[17453]: cephadm 2026-03-10T11:23:24.754951+0000 mgr.y (mgr.14120) 17 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-10T11:23:26.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:26 vm05 bash[17453]: audit 2026-03-10T11:23:25.359563+0000 mon.a (mon.0) 62 : audit [INF] from='client.? 192.168.123.105:0/3691267811' entity='client.admin' 2026-03-10T11:23:26.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:26 vm05 bash[17453]: audit 2026-03-10T11:23:25.616383+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:26.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:26 vm05 bash[17453]: audit 2026-03-10T11:23:25.763055+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14120 192.168.123.105:0/1806098670' entity='mgr.y' 2026-03-10T11:23:26.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:26 vm05 bash[17453]: audit 2026-03-10T11:23:25.782345+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.105:0/210576708' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-10T11:23:27.097 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:26 vm05 bash[17722]: ignoring --setuser ceph since I am not root 2026-03-10T11:23:27.097 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:26 vm05 bash[17722]: ignoring --setgroup ceph since I am not root 2026-03-10T11:23:27.097 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:26 vm05 bash[17722]: debug 2026-03-10T11:23:26.932+0000 7fb685c94000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:23:27.097 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:26 vm05 bash[17722]: debug 2026-03-10T11:23:26.988+0000 7fb685c94000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:23:27.118 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: { 2026-03-10T11:23:27.119 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "epoch": 9, 2026-03-10T11:23:27.119 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "available": true, 2026-03-10T11:23:27.119 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "active_name": "y", 2026-03-10T11:23:27.119 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "num_standby": 0 2026-03-10T11:23:27.119 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: } 2026-03-10T11:23:27.171 INFO:teuthology.orchestra.run.vm05.stderr:Waiting for the mgr to restart... 2026-03-10T11:23:27.171 INFO:teuthology.orchestra.run.vm05.stderr:Waiting for mgr epoch 9... 
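[editor's note] Each mgr respawn replays the per-module "has missing NOTIFY_TYPES member" warnings (a benign message from mgr modules that predate the NOTIFY_TYPES attribute), which is why the same block of roughly twenty warnings recurs three times in this excerpt. When scanning a run like this, filtering them out exposes the bootstrap progress lines; a possible one-liner, assuming the log has been saved as teuthology.log:

    grep -v 'has missing NOTIFY_TYPES member' teuthology.log \
      | grep 'run.vm05.stderr'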
2026-03-10T11:23:27.597 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:27 vm05 bash[17722]: debug 2026-03-10T11:23:27.352+0000 7fb685c94000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:23:28.019 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:27 vm05 bash[17453]: audit 2026-03-10T11:23:26.766646+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.105:0/210576708' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-10T11:23:28.019 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:27 vm05 bash[17453]: cluster 2026-03-10T11:23:26.766703+0000 mon.a (mon.0) 67 : cluster [DBG] mgrmap e9: y(active, since 6s) 2026-03-10T11:23:28.019 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:27 vm05 bash[17453]: audit 2026-03-10T11:23:27.115939+0000 mon.a (mon.0) 68 : audit [DBG] from='client.? 192.168.123.105:0/3253792627' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-10T11:23:28.019 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:27 vm05 bash[17722]: debug 2026-03-10T11:23:27.884+0000 7fb685c94000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:23:28.019 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:27 vm05 bash[17722]: debug 2026-03-10T11:23:27.980+0000 7fb685c94000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:23:28.316 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:28 vm05 bash[17722]: debug 2026-03-10T11:23:28.204+0000 7fb685c94000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:23:28.584 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:28 vm05 bash[17722]: debug 2026-03-10T11:23:28.308+0000 7fb685c94000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:23:28.584 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:28 vm05 bash[17722]: debug 2026-03-10T11:23:28.364+0000 7fb685c94000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:23:28.584 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:28 vm05 bash[17722]: debug 2026-03-10T11:23:28.512+0000 7fb685c94000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:23:28.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:28 vm05 bash[17722]: debug 2026-03-10T11:23:28.576+0000 7fb685c94000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:23:28.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:28 vm05 bash[17722]: debug 2026-03-10T11:23:28.644+0000 7fb685c94000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:23:29.568 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:29 vm05 bash[17722]: debug 2026-03-10T11:23:29.184+0000 7fb685c94000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:23:29.568 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:29 vm05 bash[17722]: debug 2026-03-10T11:23:29.240+0000 7fb685c94000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:23:29.568 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:29 vm05 bash[17722]: debug 2026-03-10T11:23:29.296+0000 7fb685c94000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:23:29.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:29 vm05 bash[17722]: debug 2026-03-10T11:23:29.648+0000 7fb685c94000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:23:29.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:29 vm05 bash[17722]: debug 
2026-03-10T11:23:29.708+0000 7fb685c94000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:23:29.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:29 vm05 bash[17722]: debug 2026-03-10T11:23:29.768+0000 7fb685c94000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:23:30.195 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:29 vm05 bash[17722]: debug 2026-03-10T11:23:29.868+0000 7fb685c94000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:23:30.500 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:30 vm05 bash[17722]: debug 2026-03-10T11:23:30.188+0000 7fb685c94000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:23:30.500 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:30 vm05 bash[17722]: debug 2026-03-10T11:23:30.368+0000 7fb685c94000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:23:30.500 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:30 vm05 bash[17722]: debug 2026-03-10T11:23:30.428+0000 7fb685c94000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:23:30.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:30 vm05 bash[17722]: debug 2026-03-10T11:23:30.492+0000 7fb685c94000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:23:30.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:30 vm05 bash[17722]: debug 2026-03-10T11:23:30.636+0000 7fb685c94000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:23:31.469 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:31 vm05 bash[17453]: cluster 2026-03-10T11:23:31.132301+0000 mon.a (mon.0) 69 : cluster [INF] Active manager daemon y restarted 2026-03-10T11:23:31.469 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:31 vm05 bash[17453]: cluster 2026-03-10T11:23:31.133450+0000 mon.a (mon.0) 70 : cluster [INF] Activating manager daemon y 2026-03-10T11:23:31.469 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:31 vm05 bash[17453]: cluster 2026-03-10T11:23:31.135487+0000 mon.a (mon.0) 71 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-10T11:23:31.470 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:31 vm05 bash[17722]: debug 2026-03-10T11:23:31.124+0000 7fb685c94000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:23:31.470 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:31 vm05 bash[17722]: [10/Mar/2026:11:23:31] ENGINE Bus STARTING 2026-03-10T11:23:31.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:31 vm05 bash[17722]: [10/Mar/2026:11:23:31] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:23:31.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:31 vm05 bash[17722]: [10/Mar/2026:11:23:31] ENGINE Bus STARTED 2026-03-10T11:23:32.210 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: { 2026-03-10T11:23:32.211 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "mgrmap_epoch": 11, 2026-03-10T11:23:32.211 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: "initialized": true 2026-03-10T11:23:32.211 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: } 2026-03-10T11:23:32.265 INFO:teuthology.orchestra.run.vm05.stderr:mgr epoch 9 is available 2026-03-10T11:23:32.265 INFO:teuthology.orchestra.run.vm05.stderr:Generating a dashboard self-signed certificate... 
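The certificate step announced here, and the dashboard port lookup a few records later, are ordinary mgr commands — all three appear verbatim in the audit trail (`mgr module enable dashboard`, `dashboard create-self-signed-cert`, `config get mgr mgr/dashboard/ssl_server_port`). Replayed by hand they would look roughly like this; error handling and the admin-user step are omitted:

    import subprocess

    def sh(*cmd):
        return subprocess.check_output(cmd, text=True).strip()

    sh("ceph", "mgr", "module", "enable", "dashboard")
    sh("ceph", "dashboard", "create-self-signed-cert")
    # The TLS port comes from mgr config (8443 by default, as fetched below):
    port = sh("ceph", "config", "get", "mgr", "mgr/dashboard/ssl_server_port")
    print(f"dashboard will serve on https://<active-mgr>:{port}/")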
2026-03-10T11:23:32.551 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: Self-signed certificate created 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: cluster 2026-03-10T11:23:31.188705+0000 mon.a (mon.0) 72 : cluster [DBG] mgrmap e10: y(active, starting, since 0.0553932s) 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.195732+0000 mon.a (mon.0) 73 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.196198+0000 mon.a (mon.0) 74 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.197577+0000 mon.a (mon.0) 75 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.197665+0000 mon.a (mon.0) 76 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.197841+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: cluster 2026-03-10T11:23:31.204000+0000 mon.a (mon.0) 78 : cluster [INF] Manager daemon y is now available 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.223316+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.224684+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.230863+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.253481+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: cephadm 2026-03-10T11:23:31.466905+0000 mgr.y (mgr.14152) 1 : cephadm [INF] [10/Mar/2026:11:23:31] ENGINE Bus STARTING 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: cephadm 2026-03-10T11:23:31.580043+0000 mgr.y (mgr.14152) 2 : cephadm [INF] 
[10/Mar/2026:11:23:31] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: cephadm 2026-03-10T11:23:31.580401+0000 mgr.y (mgr.14152) 3 : cephadm [INF] [10/Mar/2026:11:23:31] ENGINE Bus STARTED 2026-03-10T11:23:32.563 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:32 vm05 bash[17453]: audit 2026-03-10T11:23:31.584809+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:32.594 INFO:teuthology.orchestra.run.vm05.stderr:Creating initial admin user... 2026-03-10T11:23:33.006 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: {"username": "admin", "password": "$2b$12$YLCnVCJyHuOeV53x4P5dPuQD5uMUVmG7Yux0g/AOMi9M9wQ/5SSWO", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773141812, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-10T11:23:33.046 INFO:teuthology.orchestra.run.vm05.stderr:Fetching dashboard port number... 2026-03-10T11:23:33.286 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: 8443 2026-03-10T11:23:33.298 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:33 vm05 bash[17453]: cluster 2026-03-10T11:23:32.197519+0000 mon.a (mon.0) 84 : cluster [DBG] mgrmap e11: y(active, since 1.06421s) 2026-03-10T11:23:33.298 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:33 vm05 bash[17453]: audit 2026-03-10T11:23:32.199179+0000 mgr.y (mgr.14152) 4 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-10T11:23:33.298 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:33 vm05 bash[17453]: audit 2026-03-10T11:23:32.204848+0000 mgr.y (mgr.14152) 5 : audit [DBG] from='client.14156 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-10T11:23:33.298 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:33 vm05 bash[17453]: audit 2026-03-10T11:23:32.512492+0000 mgr.y (mgr.14152) 6 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:33.298 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:33 vm05 bash[17453]: audit 2026-03-10T11:23:32.545475+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:33.298 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:33 vm05 bash[17453]: audit 2026-03-10T11:23:32.548759+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:33.298 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:33 vm05 bash[17453]: audit 2026-03-10T11:23:33.003078+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:33.341 INFO:teuthology.orchestra.run.vm05.stderr:firewalld does not appear to be present 2026-03-10T11:23:33.341 INFO:teuthology.orchestra.run.vm05.stderr:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-10T11:23:33.343 INFO:teuthology.orchestra.run.vm05.stderr:Ceph Dashboard is now available at: 2026-03-10T11:23:33.343 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T11:23:33.343 INFO:teuthology.orchestra.run.vm05.stderr: URL: https://vm05.local:8443/ 2026-03-10T11:23:33.343 INFO:teuthology.orchestra.run.vm05.stderr: User: admin 2026-03-10T11:23:33.343 INFO:teuthology.orchestra.run.vm05.stderr: Password: 5vnwzhty60 2026-03-10T11:23:33.343 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T11:23:33.343 INFO:teuthology.orchestra.run.vm05.stderr:Enabling autotune for osd_memory_target 2026-03-10T11:23:33.946 INFO:teuthology.orchestra.run.vm05.stderr:/usr/bin/ceph: set mgr/dashboard/cluster/status 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr:You can access the Ceph CLI with: 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr: sudo /home/ubuntu/cephtest/cephadm shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr:Please consider enabling telemetry to help improve Ceph: 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr: ceph telemetry on 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr:For more information see: 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr: https://docs.ceph.com/docs/master/mgr/telemetry/ 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-10T11:23:33.984 INFO:teuthology.orchestra.run.vm05.stderr:Bootstrap complete. 2026-03-10T11:23:34.004 INFO:tasks.cephadm:Fetching config... 2026-03-10T11:23:34.004 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T11:23:34.004 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-10T11:23:34.009 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-10T11:23:34.009 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T11:23:34.009 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-10T11:23:34.056 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-10T11:23:34.056 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T11:23:34.056 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.a/keyring of=/dev/stdout 2026-03-10T11:23:34.105 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-10T11:23:34.105 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T11:23:34.105 DEBUG:teuthology.orchestra.run.vm05:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-10T11:23:34.157 INFO:tasks.cephadm:Installing pub ssh key for root users... 
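The "Installing pub ssh key" step that follows is a plain authorized_keys append on every target, so the cephadm mgr module can SSH in as root. A sketch of the equivalent loop, assuming passwordless SSH to the nodes and the ceph.pub path used in this run; the host list is this job's targets, not a general default:

    import shlex
    import subprocess

    # Public key fetched earlier from the bootstrap node in this run.
    pubkey = open("/home/ubuntu/cephtest/ceph.pub").read().strip()
    install_cmd = (
        "sudo install -d -m 0700 /root/.ssh && "
        f"echo {shlex.quote(pubkey)} | sudo tee -a /root/.ssh/authorized_keys && "
        "sudo chmod 0600 /root/.ssh/authorized_keys"
    )
    for host in ["vm05", "vm07"]:  # this job's targets
        subprocess.check_call(["ssh", host, install_cmd])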
2026-03-10T11:23:34.158 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxOfDH30xfaCFeOq+5EWl1iLXU0I7IvWkvwwEO4LzRUoi6d8jLVyXX8c+mOrtiQ7WiAJNEvR60/ShIrNmWEtxSEvRexDJBD8jOS76U9gNgXK3pgiK397TBO1PqMJd0KY52lDAtUna5eJ+Dh/5LY3voKxdifkgIkBckg91kL21UBQDpfEk9886rzW3495/J3fwM5BIeAnDNh12yWDOlZyYAAotIH2FOjyPa8zb0f7Duvmrj8iNhScx5/i0mqTcUdIdwCPk5WwKsYdCmWXI//BJIa3mA5bf7UW4Oy/TQLJl6LUdfEpOh/OoHNPosEPvftjfMvwzOmLssvFDtmz2PQkfiH9MiF6+y+Bxy9ZQONcaWAp2qXsSvMIM0cr2/kPlUVzeA4y/3Fhbdooo2xme4WBit7q97nz+/hhDGtK5swz0uqdxtAT0UmeOBgfoJY5+Y4M+hghvJlhx1VRviA+Hyl4P9hDOYq/D/ssr0PttEQ+F9E4k0fy/JJUnPawvu0siZVGM= ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T11:23:34.214 INFO:teuthology.orchestra.run.vm05.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxOfDH30xfaCFeOq+5EWl1iLXU0I7IvWkvwwEO4LzRUoi6d8jLVyXX8c+mOrtiQ7WiAJNEvR60/ShIrNmWEtxSEvRexDJBD8jOS76U9gNgXK3pgiK397TBO1PqMJd0KY52lDAtUna5eJ+Dh/5LY3voKxdifkgIkBckg91kL21UBQDpfEk9886rzW3495/J3fwM5BIeAnDNh12yWDOlZyYAAotIH2FOjyPa8zb0f7Duvmrj8iNhScx5/i0mqTcUdIdwCPk5WwKsYdCmWXI//BJIa3mA5bf7UW4Oy/TQLJl6LUdfEpOh/OoHNPosEPvftjfMvwzOmLssvFDtmz2PQkfiH9MiF6+y+Bxy9ZQONcaWAp2qXsSvMIM0cr2/kPlUVzeA4y/3Fhbdooo2xme4WBit7q97nz+/hhDGtK5swz0uqdxtAT0UmeOBgfoJY5+Y4M+hghvJlhx1VRviA+Hyl4P9hDOYq/D/ssr0PttEQ+F9E4k0fy/JJUnPawvu0siZVGM= ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:23:34.221 DEBUG:teuthology.orchestra.run.vm07:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxOfDH30xfaCFeOq+5EWl1iLXU0I7IvWkvwwEO4LzRUoi6d8jLVyXX8c+mOrtiQ7WiAJNEvR60/ShIrNmWEtxSEvRexDJBD8jOS76U9gNgXK3pgiK397TBO1PqMJd0KY52lDAtUna5eJ+Dh/5LY3voKxdifkgIkBckg91kL21UBQDpfEk9886rzW3495/J3fwM5BIeAnDNh12yWDOlZyYAAotIH2FOjyPa8zb0f7Duvmrj8iNhScx5/i0mqTcUdIdwCPk5WwKsYdCmWXI//BJIa3mA5bf7UW4Oy/TQLJl6LUdfEpOh/OoHNPosEPvftjfMvwzOmLssvFDtmz2PQkfiH9MiF6+y+Bxy9ZQONcaWAp2qXsSvMIM0cr2/kPlUVzeA4y/3Fhbdooo2xme4WBit7q97nz+/hhDGtK5swz0uqdxtAT0UmeOBgfoJY5+Y4M+hghvJlhx1VRviA+Hyl4P9hDOYq/D/ssr0PttEQ+F9E4k0fy/JJUnPawvu0siZVGM= ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-10T11:23:34.233 INFO:teuthology.orchestra.run.vm07.stdout:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxOfDH30xfaCFeOq+5EWl1iLXU0I7IvWkvwwEO4LzRUoi6d8jLVyXX8c+mOrtiQ7WiAJNEvR60/ShIrNmWEtxSEvRexDJBD8jOS76U9gNgXK3pgiK397TBO1PqMJd0KY52lDAtUna5eJ+Dh/5LY3voKxdifkgIkBckg91kL21UBQDpfEk9886rzW3495/J3fwM5BIeAnDNh12yWDOlZyYAAotIH2FOjyPa8zb0f7Duvmrj8iNhScx5/i0mqTcUdIdwCPk5WwKsYdCmWXI//BJIa3mA5bf7UW4Oy/TQLJl6LUdfEpOh/OoHNPosEPvftjfMvwzOmLssvFDtmz2PQkfiH9MiF6+y+Bxy9ZQONcaWAp2qXsSvMIM0cr2/kPlUVzeA4y/3Fhbdooo2xme4WBit7q97nz+/hhDGtK5swz0uqdxtAT0UmeOBgfoJY5+Y4M+hghvJlhx1VRviA+Hyl4P9hDOYq/D/ssr0PttEQ+F9E4k0fy/JJUnPawvu0siZVGM= ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:23:34.239 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-10T11:23:34.275 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:34 vm05 bash[17453]: audit 2026-03-10T11:23:32.845164+0000 mgr.y (mgr.14152) 7 : audit [DBG] from='client.14166 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": 
true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:34.275 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:34 vm05 bash[17453]: audit 2026-03-10T11:23:33.283783+0000 mon.a (mon.0) 88 : audit [DBG] from='client.? 192.168.123.105:0/3693903804' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-10T11:23:34.275 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:34 vm05 bash[17453]: audit 2026-03-10T11:23:33.940196+0000 mon.a (mon.0) 89 : audit [INF] from='client.? 192.168.123.105:0/2544116443' entity='client.admin' 2026-03-10T11:23:34.275 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:34 vm05 bash[17453]: cluster 2026-03-10T11:23:34.006239+0000 mon.a (mon.0) 90 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-10T11:23:34.853 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-10T11:23:34.853 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-10T11:23:35.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:35 vm05 bash[17453]: audit 2026-03-10T11:23:34.283090+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:35.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:35 vm05 bash[17453]: audit 2026-03-10T11:23:34.636864+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:35.335 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:35 vm05 bash[17453]: audit 2026-03-10T11:23:34.793986+0000 mon.a (mon.0) 93 : audit [INF] from='client.? 192.168.123.105:0/2206824738' entity='client.admin' 2026-03-10T11:23:35.383 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm07 2026-03-10T11:23:35.383 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T11:23:35.383 DEBUG:teuthology.orchestra.run.vm07:> dd of=/etc/ceph/ceph.conf 2026-03-10T11:23:35.386 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T11:23:35.386 DEBUG:teuthology.orchestra.run.vm07:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:23:35.432 INFO:tasks.cephadm:Adding host vm07 to orchestrator... 
2026-03-10T11:23:35.433 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch host add vm07 2026-03-10T11:23:36.567 INFO:teuthology.orchestra.run.vm05.stdout:Added host 'vm07' with addr '192.168.123.107' 2026-03-10T11:23:36.578 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:36 vm05 bash[17453]: audit 2026-03-10T11:23:35.316663+0000 mgr.y (mgr.14152) 8 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:36.578 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:36 vm05 bash[17453]: audit 2026-03-10T11:23:35.320014+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:36.639 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch host ls --format=json 2026-03-10T11:23:37.077 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:23:37.077 INFO:teuthology.orchestra.run.vm05.stdout:[{"addr": "192.168.123.105", "hostname": "vm05", "labels": [], "status": ""}, {"addr": "192.168.123.107", "hostname": "vm07", "labels": [], "status": ""}] 2026-03-10T11:23:37.131 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-10T11:23:37.132 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd crush tunables default 2026-03-10T11:23:37.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:37 vm05 bash[17453]: audit 2026-03-10T11:23:35.869803+0000 mgr.y (mgr.14152) 9 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm07", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:37.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:37 vm05 bash[17453]: cephadm 2026-03-10T11:23:36.224187+0000 mgr.y (mgr.14152) 10 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-10T11:23:37.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:37 vm05 bash[17453]: audit 2026-03-10T11:23:36.563826+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:37.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:37 vm05 bash[17453]: cephadm 2026-03-10T11:23:36.564159+0000 mgr.y (mgr.14152) 11 : cephadm [INF] Added host vm07 2026-03-10T11:23:38.578 INFO:teuthology.orchestra.run.vm05.stderr:adjusted tunables profile to default 2026-03-10T11:23:38.650 INFO:tasks.cephadm:Adding mon.a on vm05 2026-03-10T11:23:38.650 INFO:tasks.cephadm:Adding mon.c on vm05 2026-03-10T11:23:38.650 INFO:tasks.cephadm:Adding mon.b on vm07 2026-03-10T11:23:38.650 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch apply mon '3;vm05:192.168.123.105=a;vm05:[v2:192.168.123.105:3301,v1:192.168.123.105:6790]=c;vm07:192.168.123.107=b' 2026-03-10T11:23:38.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
11:23:38 vm05 bash[17453]: audit 2026-03-10T11:23:37.074045+0000 mgr.y (mgr.14152) 12 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:23:38.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:38 vm05 bash[17453]: cluster 2026-03-10T11:23:37.569330+0000 mon.a (mon.0) 96 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-10T11:23:38.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:38 vm05 bash[17453]: audit 2026-03-10T11:23:37.635761+0000 mon.a (mon.0) 97 : audit [INF] from='client.? 192.168.123.105:0/4158852305' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-10T11:23:39.115 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled mon update... 2026-03-10T11:23:39.192 DEBUG:teuthology.orchestra.run.vm05:mon.c> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.c.service 2026-03-10T11:23:39.193 DEBUG:teuthology.orchestra.run.vm07:mon.b> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.b.service 2026-03-10T11:23:39.194 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-10T11:23:39.194 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph mon dump -f json 2026-03-10T11:23:39.721 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T11:23:39.721 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":1,"fsid":"72041074-1c73-11f1-8607-4fca9a5e0a4d","modified":"2026-03-10T11:23:05.182054Z","created":"2026-03-10T11:23:05.182054Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T11:23:39.724 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 1 2026-03-10T11:23:39.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:38.573250+0000 mon.a (mon.0) 98 : audit [INF] from='client.? 
192.168.123.105:0/4158852305' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: cluster 2026-03-10T11:23:38.573378+0000 mon.a (mon.0) 99 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:38.953819+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:38.954301+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:38.956427+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:38.956961+0000 mon.a (mon.0) 103 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:38.957566+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:38.958007+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:39.074705+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:39.111761+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:39 vm05 bash[17453]: audit 2026-03-10T11:23:39.114900+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:40.792 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
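The `ceph orch apply mon '3;vm05:192.168.123.105=a;...'` call above packs the whole placement into one string: a daemon count, then semicolon-separated host:addr=name pins (mon.c is pinned to non-default v2/v1 ports so two mons can share vm05). A few lines of plain Python make the grammar explicit; the parsing here is illustrative, not cephadm's parser:

    placement = ("3;"
                 "vm05:192.168.123.105=a;"
                 "vm05:[v2:192.168.123.105:3301,v1:192.168.123.105:6790]=c;"
                 "vm07:192.168.123.107=b")
    count, *pins = placement.split(";")
    print(f"{count} mons requested")
    for pin in pins:
        host_addr, name = pin.rsplit("=", 1)
        host, addr = host_addr.split(":", 1)
        print(f"mon.{name} -> host {host}, addr {addr}")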
2026-03-10T11:23:40.792 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph mon dump -f json 2026-03-10T11:23:40.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:40 vm05 bash[17453]: cephadm 2026-03-10T11:23:38.958623+0000 mgr.y (mgr.14152) 13 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:23:40.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:40 vm05 bash[17453]: cephadm 2026-03-10T11:23:39.015191+0000 mgr.y (mgr.14152) 14 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:23:40.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:40 vm05 bash[17453]: audit 2026-03-10T11:23:39.107263+0000 mgr.y (mgr.14152) 15 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm05:192.168.123.105=a;vm05:[v2:192.168.123.105:3301,v1:192.168.123.105:6790]=c;vm07:192.168.123.107=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:40.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:40 vm05 bash[17453]: cephadm 2026-03-10T11:23:39.108638+0000 mgr.y (mgr.14152) 16 : cephadm [INF] Saving service mon spec with placement vm05:192.168.123.105=a;vm05:[v2:192.168.123.105:3301,v1:192.168.123.105:6790]=c;vm07:192.168.123.107=b;count:3 2026-03-10T11:23:40.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:40 vm05 bash[17453]: audit 2026-03-10T11:23:39.718724+0000 mon.a (mon.0) 109 : audit [DBG] from='client.? 192.168.123.107:0/3780628743' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T11:23:40.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:40 vm05 bash[17453]: audit 2026-03-10T11:23:39.757339+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:40.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:40 vm05 bash[17453]: audit 2026-03-10T11:23:40.022880+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:41.287 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T11:23:41.287 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":1,"fsid":"72041074-1c73-11f1-8607-4fca9a5e0a4d","modified":"2026-03-10T11:23:05.182054Z","created":"2026-03-10T11:23:05.182054Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T11:23:41.290 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 1 2026-03-10T11:23:41.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:41 vm05 bash[17453]: audit 2026-03-10T11:23:41.284386+0000 mon.a (mon.0) 112 : audit [DBG] from='client.? 192.168.123.107:0/726396929' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T11:23:42.336 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-10T11:23:42.337 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph mon dump -f json 2026-03-10T11:23:42.904 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T11:23:42.905 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":1,"fsid":"72041074-1c73-11f1-8607-4fca9a5e0a4d","modified":"2026-03-10T11:23:05.182054Z","created":"2026-03-10T11:23:05.182054Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-10T11:23:42.907 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 1 2026-03-10T11:23:43.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:23:43.098 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:23:43.500 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:43 vm05 bash[22470]: debug 2026-03-10T11:23:43.364+0000 7f765d585700 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:42.317536+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:42.321275+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:42.321892+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:42.324900+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:42.325833+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:42.326274+0000 mon.a (mon.0) 118 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: cephadm 2026-03-10T11:23:42.326834+0000 mgr.y (mgr.14152) 17 : cephadm [INF] Deploying daemon mon.c on vm05 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:42.901988+0000 mon.a (mon.0) 119 : audit [DBG] from='client.? 192.168.123.107:0/304633570' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:43.234597+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:43.235481+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:43 vm05 bash[17453]: audit 2026-03-10T11:23:43.235981+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:43.501 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:23:43 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:23:43.955 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
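Each "Waiting for 3 mons in monmap..." round is one `ceph mon dump -f json` call, checking the length of the "mons" array (the monmap stays at epoch 1 above until mon.c and mon.b actually join). A sketch of that polling loop under the same assumptions as before:

    import json
    import subprocess
    import time

    def mon_dump():
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "-f", "json"], text=True)
        return json.loads(out)

    def wait_for_mons(n, timeout=300, interval=1):
        """Poll the monmap until it lists n monitors."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            dump = mon_dump()
            if len(dump["mons"]) >= n:
                return dump
            time.sleep(interval)
        raise TimeoutError(f"monmap never reached {n} mons")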
2026-03-10T11:23:43.955 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph mon dump -f json 2026-03-10T11:23:44.656 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:44 vm07 bash[17804]: audit 2026-03-10T11:23:43.235981+0000 mon.a (mon.0) 122 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:44.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:44 vm07 bash[17804]: debug 2026-03-10T11:23:44.650+0000 7f6935b99700 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-10T11:23:44.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:44 vm07 bash[17804]: debug 2026-03-10T11:23:44.654+0000 7f6935b99700 10 mon.b@-1(synchronizing) e2 handle_conf_change mon_allow_pool_delete,mon_cluster_log_to_file 2026-03-10T11:23:48.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: cephadm 2026-03-10T11:23:43.236539+0000 mgr.y (mgr.14152) 18 : cephadm [INF] Deploying daemon mon.b on vm07 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:43.374935+0000 mon.a (mon.0) 124 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: cluster 2026-03-10T11:23:43.375508+0000 mon.a (mon.0) 125 : cluster [INF] mon.a calling monitor election 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:43.377173+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:44.370648+0000 mon.a (mon.0) 127 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:44.658521+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:45.370660+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: cluster 2026-03-10T11:23:45.372787+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:45.658570+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:46.370843+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 
cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:46.658550+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:47.370883+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:47.659017+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:48.371220+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: cluster 2026-03-10T11:23:48.379964+0000 mon.a (mon.0) 136 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: cluster 2026-03-10T11:23:48.384167+0000 mon.a (mon.0) 137 : cluster [DBG] monmap e2: 2 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: cluster 2026-03-10T11:23:48.384227+0000 mon.a (mon.0) 138 : cluster [DBG] fsmap 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: cluster 2026-03-10T11:23:48.384255+0000 mon.a (mon.0) 139 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: cluster 2026-03-10T11:23:48.384377+0000 mon.a (mon.0) 140 : cluster [DBG] mgrmap e13: y(active, since 17s) 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: cluster 2026-03-10T11:23:48.389154+0000 mon.a (mon.0) 141 : cluster [INF] overall HEALTH_OK 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:48.393156+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:48.394362+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:48.395925+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:48 vm05 bash[22470]: audit 2026-03-10T11:23:48.396373+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: cephadm 2026-03-10T11:23:43.236539+0000 mgr.y (mgr.14152) 18 : cephadm [INF] Deploying daemon mon.b on vm07 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:43.374935+0000 mon.a (mon.0) 124 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: cluster 2026-03-10T11:23:43.375508+0000 mon.a (mon.0) 125 : cluster [INF] mon.a calling monitor election 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:43.377173+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:44.370648+0000 mon.a (mon.0) 127 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:44.658521+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:45.370660+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: cluster 2026-03-10T11:23:45.372787+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:45.658570+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:46.370843+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:46.658550+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:47.370883+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:47.659017+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: 
audit 2026-03-10T11:23:48.371220+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: cluster 2026-03-10T11:23:48.379964+0000 mon.a (mon.0) 136 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: cluster 2026-03-10T11:23:48.384167+0000 mon.a (mon.0) 137 : cluster [DBG] monmap e2: 2 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: cluster 2026-03-10T11:23:48.384227+0000 mon.a (mon.0) 138 : cluster [DBG] fsmap 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: cluster 2026-03-10T11:23:48.384255+0000 mon.a (mon.0) 139 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: cluster 2026-03-10T11:23:48.384377+0000 mon.a (mon.0) 140 : cluster [DBG] mgrmap e13: y(active, since 17s) 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: cluster 2026-03-10T11:23:48.389154+0000 mon.a (mon.0) 141 : cluster [INF] overall HEALTH_OK 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:48.393156+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:48.394362+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:48.395925+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:48.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:48 vm05 bash[17453]: audit 2026-03-10T11:23:48.396373+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: cluster 2026-03-10T11:23:48.665118+0000 mon.a (mon.0) 147 : cluster [INF] mon.a calling monitor election 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: cluster 2026-03-10T11:23:48.667750+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:48.668110+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:48.668380+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 
2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:48.668815+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:49.659136+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:50.659119+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: cluster 2026-03-10T11:23:51.198666+0000 mgr.y (mgr.14152) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:51.659397+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:52.659466+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:53.659757+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: cluster 2026-03-10T11:23:53.666833+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: cluster 2026-03-10T11:23:53.669969+0000 mon.a (mon.0) 157 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: cluster 2026-03-10T11:23:53.670061+0000 mon.a (mon.0) 158 : cluster [DBG] fsmap 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: cluster 2026-03-10T11:23:53.670136+0000 mon.a (mon.0) 159 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: cluster 2026-03-10T11:23:53.670344+0000 mon.a (mon.0) 160 : cluster [DBG] mgrmap e13: y(active, since 22s) 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: cluster 2026-03-10T11:23:53.676803+0000 mon.a (mon.0) 161 : cluster [INF] overall HEALTH_OK 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:53.680372+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:53.948 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:53.684520+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:53 vm05 bash[17453]: audit 2026-03-10T11:23:53.687520+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: cluster 2026-03-10T11:23:48.665118+0000 mon.a (mon.0) 147 : cluster [INF] mon.a calling monitor election 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: cluster 2026-03-10T11:23:48.667750+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:48.668110+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:48.668380+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:48.668815+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:49.659136+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:50.659119+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: cluster 2026-03-10T11:23:51.198666+0000 mgr.y (mgr.14152) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:51.659397+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:52.659466+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:53.659757+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: cluster 2026-03-10T11:23:53.666833+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 
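Note that quorum trails the monmap: the election records above show mons a,c forming quorum while mon.b, already deployed, is still synchronizing — the epoch-3 dump below accordingly lists three mons but reports "quorum":[0,1]. A quick way to see who is lagging, using the standard `ceph quorum_status` command (field names as in current Ceph releases):

    import json
    import subprocess

    qs = json.loads(subprocess.check_output(
        ["ceph", "quorum_status", "-f", "json"], text=True))
    in_quorum = set(qs["quorum_names"])
    all_mons = {m["name"] for m in qs["monmap"]["mons"]}
    print("in quorum:", sorted(in_quorum))
    if all_mons - in_quorum:
        print("still syncing:", sorted(all_mons - in_quorum))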
2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: cluster 2026-03-10T11:23:53.669969+0000 mon.a (mon.0) 157 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]}
2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: cluster 2026-03-10T11:23:53.670061+0000 mon.a (mon.0) 158 : cluster [DBG] fsmap
2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: cluster 2026-03-10T11:23:53.670136+0000 mon.a (mon.0) 159 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: cluster 2026-03-10T11:23:53.670344+0000 mon.a (mon.0) 160 : cluster [DBG] mgrmap e13: y(active, since 22s)
2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: cluster 2026-03-10T11:23:53.676803+0000 mon.a (mon.0) 161 : cluster [INF] overall HEALTH_OK
2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:53.680372+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:53.684520+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:53.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:53 vm05 bash[22470]: audit 2026-03-10T11:23:53.687520+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.003 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T11:23:54.003 INFO:teuthology.orchestra.run.vm07.stdout:{"epoch":3,"fsid":"72041074-1c73-11f1-8607-4fca9a5e0a4d","modified":"2026-03-10T11:23:48.660252Z","created":"2026-03-10T11:23:05.182054Z","min_mon_release":17,"min_mon_release_name":"quincy","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3300","nonce":0},{"type":"v1","addr":"192.168.123.105:6789","nonce":0}]},"addr":"192.168.123.105:6789/0","public_addr":"192.168.123.105:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:3301","nonce":0},{"type":"v1","addr":"192.168.123.105:6790","nonce":0}]},"addr":"192.168.123.105:6790/0","public_addr":"192.168.123.105:6790/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:3300","nonce":0},{"type":"v1","addr":"192.168.123.107:6789","nonce":0}]},"addr":"192.168.123.107:6789/0","public_addr":"192.168.123.107:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-10T11:23:54.006 INFO:teuthology.orchestra.run.vm07.stderr:dumped monmap epoch 3
2026-03-10T11:23:54.066 INFO:tasks.cephadm:Generating final ceph.conf file...
2026-03-10T11:23:54.066 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph config generate-minimal-conf
2026-03-10T11:23:54.536 INFO:teuthology.orchestra.run.vm05.stdout:# minimal ceph.conf for 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:54.537 INFO:teuthology.orchestra.run.vm05.stdout:[global]
2026-03-10T11:23:54.537 INFO:teuthology.orchestra.run.vm05.stdout: fsid = 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:23:54.537 INFO:teuthology.orchestra.run.vm05.stdout: mon_host = [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] [v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]
2026-03-10T11:23:54.781 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-10T11:23:54.781 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T11:23:54.781 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T11:23:54.788 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T11:23:54.788 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:23:54.805 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: cluster 2026-03-10T11:23:53.198928+0000 mgr.y (mgr.14152) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:23:54.805 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: cephadm 2026-03-10T11:23:53.680802+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: cephadm 2026-03-10T11:23:53.685497+0000 mgr.y (mgr.14152) 22 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: cephadm 2026-03-10T11:23:53.750589+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:53.751738+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:53.809713+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:53.813865+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: cephadm 2026-03-10T11:23:53.814459+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:53.814631+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:53.815040+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:53.815409+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: cephadm 2026-03-10T11:23:53.815885+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Reconfiguring daemon mon.c on vm05
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.001024+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 192.168.123.107:0/467699280' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.056721+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.058010+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.058825+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.059563+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.331961+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.333512+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.334311+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.335135+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.534172+0000 mon.a (mon.0) 180 : audit [DBG] from='client.? 192.168.123.105:0/778651669' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:54 vm05 bash[22470]: audit 2026-03-10T11:23:54.659787+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: cluster 2026-03-10T11:23:53.198928+0000 mgr.y (mgr.14152) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: cephadm 2026-03-10T11:23:53.680802+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: cephadm 2026-03-10T11:23:53.685497+0000 mgr.y (mgr.14152) 22 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: cephadm 2026-03-10T11:23:53.750589+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:53.751738+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:53.809713+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:53.813865+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: cephadm 2026-03-10T11:23:53.814459+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:53.814631+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:53.815040+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:53.815409+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: cephadm 2026-03-10T11:23:53.815885+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Reconfiguring daemon mon.c on vm05
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.001024+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 192.168.123.107:0/467699280' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T11:23:54.806 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.056721+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.058010+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:23:54.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.058825+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:23:54.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.059563+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:54.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.331961+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:54.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.333512+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:23:54.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.334311+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:23:54.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.335135+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:54.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.534172+0000 mon.a (mon.0) 180 : audit [DBG] from='client.? 192.168.123.105:0/778651669' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:54.807 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:54 vm05 bash[17453]: audit 2026-03-10T11:23:54.659787+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:54.813 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T11:23:54.813 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.conf
2026-03-10T11:23:54.820 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T11:23:54.820 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:23:54.867 INFO:tasks.cephadm:Adding mgr.y on vm05
2026-03-10T11:23:54.868 INFO:tasks.cephadm:Adding mgr.x on vm07
2026-03-10T11:23:54.868 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch apply mgr '2;vm05=y;vm07=x'
2026-03-10T11:23:55.326 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled mgr update...
2026-03-10T11:23:55.393 DEBUG:teuthology.orchestra.run.vm07:mgr.x> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.x.service
2026-03-10T11:23:55.394 INFO:tasks.cephadm:Deploying OSDs...
2026-03-10T11:23:55.394 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-10T11:23:55.394 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T11:23:55.397 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:23:55.397 DEBUG:teuthology.orchestra.run.vm05:> ls /dev/[sv]d?
2026-03-10T11:23:55.440 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vda
2026-03-10T11:23:55.440 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdb
2026-03-10T11:23:55.440 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdc
2026-03-10T11:23:55.440 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vdd
2026-03-10T11:23:55.440 INFO:teuthology.orchestra.run.vm05.stdout:/dev/vde
2026-03-10T11:23:55.440 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T11:23:55.440 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T11:23:55.440 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdb
2026-03-10T11:23:55.484 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdb
2026-03-10T11:23:55.484 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T11:23:55.484 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-10T11:23:55.484 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T11:23:55.484 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 11:23:38.524471748 +0000
2026-03-10T11:23:55.484 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 11:23:37.684471748 +0000
2026-03-10T11:23:55.484 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 11:23:37.684471748 +0000
2026-03-10T11:23:55.484 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-10T11:23:55.484 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T11:23:55.532 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T11:23:55.532 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T11:23:55.532 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.00018144 s, 2.8 MB/s
2026-03-10T11:23:55.533 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T11:23:55.578 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdc
2026-03-10T11:23:55.624 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdc
2026-03-10T11:23:55.624 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T11:23:55.624 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T11:23:55.624 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T11:23:55.624 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 11:23:38.620471748 +0000
2026-03-10T11:23:55.624 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 11:23:37.688471748 +0000
2026-03-10T11:23:55.624 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 11:23:37.688471748 +0000
2026-03-10T11:23:55.624 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-10T11:23:55.624 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T11:23:55.670 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cephadm 2026-03-10T11:23:43.236539+0000 mgr.y (mgr.14152) 18 : cephadm [INF] Deploying daemon mon.b on vm07
2026-03-10T11:23:55.672 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T11:23:55.672 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T11:23:55.672 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000131666 s, 3.9 MB/s
2026-03-10T11:23:55.672 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T11:23:55.717 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vdd
2026-03-10T11:23:55.764 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vdd
2026-03-10T11:23:55.764 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T11:23:55.764 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T11:23:55.764 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T11:23:55.764 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 11:23:38.712471748 +0000
2026-03-10T11:23:55.764 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 11:23:37.680471748 +0000
2026-03-10T11:23:55.764 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 11:23:37.680471748 +0000
2026-03-10T11:23:55.764 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-10T11:23:55.764 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T11:23:55.812 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T11:23:55.812 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T11:23:55.812 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000121977 s, 4.2 MB/s
2026-03-10T11:23:55.812 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T11:23:55.857 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vde
2026-03-10T11:23:55.904 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vde
2026-03-10T11:23:55.904 INFO:teuthology.orchestra.run.vm05.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T11:23:55.904 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T11:23:55.904 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T11:23:55.904 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-10 11:23:38.820471748 +0000
2026-03-10T11:23:55.904 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-10 11:23:37.684471748 +0000
2026-03-10T11:23:55.904 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-10 11:23:37.684471748 +0000
2026-03-10T11:23:55.904 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-10T11:23:55.904 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:43.374935+0000 mon.a (mon.0) 124 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:43.375508+0000 mon.a (mon.0) 125 : cluster [INF] mon.a calling monitor election
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:43.377173+0000 mon.a (mon.0) 126 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:44.370648+0000 mon.a (mon.0) 127 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:44.658521+0000 mon.a (mon.0) 128 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:45.370660+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:45.372787+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:45.658570+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:46.370843+0000 mon.a (mon.0) 131 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:46.658550+0000 mon.a (mon.0) 132 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:47.370883+0000 mon.a (mon.0) 133 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:47.659017+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:48.371220+0000 mon.a (mon.0) 135 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:48.379964+0000 mon.a (mon.0) 136 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:48.384167+0000 mon.a (mon.0) 137 : cluster [DBG] monmap e2: 2 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]}
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:48.384227+0000 mon.a (mon.0) 138 : cluster [DBG] fsmap
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:48.384255+0000 mon.a (mon.0) 139 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:48.384377+0000 mon.a (mon.0) 140 : cluster [DBG] mgrmap e13: y(active, since 17s)
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:48.389154+0000 mon.a (mon.0) 141 : cluster [INF] overall HEALTH_OK
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:48.393156+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:48.394362+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:48.395925+0000 mon.a (mon.0) 144 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:48.396373+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:48.665118+0000 mon.a (mon.0) 147 : cluster [INF] mon.a calling monitor election
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:48.667750+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:48.668110+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:48.668380+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:48.668815+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:49.659136+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:50.659119+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:51.198666+0000 mgr.y (mgr.14152) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:51.659397+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.927 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:52.659466+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.659757+0000 mon.a (mon.0) 155 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:53.666833+0000 mon.a (mon.0) 156 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1)
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:53.669969+0000 mon.a (mon.0) 157 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]}
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:53.670061+0000 mon.a (mon.0) 158 : cluster [DBG] fsmap
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:53.670136+0000 mon.a (mon.0) 159 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:53.670344+0000 mon.a (mon.0) 160 : cluster [DBG] mgrmap e13: y(active, since 22s)
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:53.676803+0000 mon.a (mon.0) 161 : cluster [INF] overall HEALTH_OK
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.680372+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.684520+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.687520+0000 mon.a (mon.0) 164 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cluster 2026-03-10T11:23:53.198928+0000 mgr.y (mgr.14152) 20 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cephadm 2026-03-10T11:23:53.680802+0000 mgr.y (mgr.14152) 21 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cephadm 2026-03-10T11:23:53.685497+0000 mgr.y (mgr.14152) 22 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cephadm 2026-03-10T11:23:53.750589+0000 mgr.y (mgr.14152) 23 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.751738+0000 mon.a (mon.0) 165 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.809713+0000 mon.a (mon.0) 166 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.813865+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cephadm 2026-03-10T11:23:53.814459+0000 mgr.y (mgr.14152) 24 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.814631+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.815040+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:53.815409+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: cephadm 2026-03-10T11:23:53.815885+0000 mgr.y (mgr.14152) 25 : cephadm [INF] Reconfiguring daemon mon.c on vm05
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.001024+0000 mon.a (mon.0) 171 : audit [DBG] from='client.? 192.168.123.107:0/467699280' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.056721+0000 mon.a (mon.0) 172 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.058010+0000 mon.a (mon.0) 173 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.058825+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.059563+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.331961+0000 mon.a (mon.0) 176 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.333512+0000 mon.a (mon.0) 177 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.334311+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.335135+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.534172+0000 mon.a (mon.0) 180 : audit [DBG] from='client.? 192.168.123.105:0/778651669' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 bash[17804]: audit 2026-03-10T11:23:54.659787+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:55 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:23:55.928 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:55 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:23:55.954 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-10T11:23:55.954 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-10T11:23:55.954 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000131517 s, 3.9 MB/s
2026-03-10T11:23:55.955 DEBUG:teuthology.orchestra.run.vm05:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T11:23:56.005 DEBUG:teuthology.orchestra.run.vm07:> set -ex
2026-03-10T11:23:56.005 DEBUG:teuthology.orchestra.run.vm07:> dd if=/scratch_devs of=/dev/stdout
2026-03-10T11:23:56.008 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:23:56.008 DEBUG:teuthology.orchestra.run.vm07:> ls /dev/[sv]d?
2026-03-10T11:23:56.057 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vda
2026-03-10T11:23:56.057 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdb
2026-03-10T11:23:56.057 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdc
2026-03-10T11:23:56.057 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vdd
2026-03-10T11:23:56.057 INFO:teuthology.orchestra.run.vm07.stdout:/dev/vde
2026-03-10T11:23:56.057 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-10T11:23:56.057 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-10T11:23:56.057 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdb
2026-03-10T11:23:56.104 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdb
2026-03-10T11:23:56.104 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T11:23:56.104 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-10T11:23:56.104 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T11:23:56.104 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 11:23:41.894874050 +0000
2026-03-10T11:23:56.104 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 11:23:41.018874050 +0000
2026-03-10T11:23:56.104 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 11:23:41.018874050 +0000
2026-03-10T11:23:56.104 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T11:23:56.104 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-10T11:23:56.153 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T11:23:56.153 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T11:23:56.153 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000128009 s, 4.0 MB/s
2026-03-10T11:23:56.157 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-10T11:23:56.198 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:55 vm07 systemd[1]: Started Ceph mgr.x for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:23:56.198 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:56 vm07 bash[18531]: debug 2026-03-10T11:23:56.162+0000 7f293a911000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T11:23:56.206 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdc
2026-03-10T11:23:56.253 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdc
2026-03-10T11:23:56.253 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T11:23:56.253 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-10T11:23:56.253 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T11:23:56.253 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 11:23:41.990874050 +0000
2026-03-10T11:23:56.253 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 11:23:41.014874050 +0000
2026-03-10T11:23:56.253 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 11:23:41.014874050 +0000
2026-03-10T11:23:56.253 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T11:23:56.253 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-10T11:23:56.304 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T11:23:56.304 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T11:23:56.304 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000218318 s, 2.3 MB/s
2026-03-10T11:23:56.305 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-10T11:23:56.353 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vdd
2026-03-10T11:23:56.400 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vdd
2026-03-10T11:23:56.400 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T11:23:56.400 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-10T11:23:56.400 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T11:23:56.400 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 11:23:42.082874050 +0000
2026-03-10T11:23:56.400 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 11:23:41.018874050 +0000
2026-03-10T11:23:56.400 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 11:23:41.018874050 +0000
2026-03-10T11:23:56.400 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T11:23:56.400 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:50.662994+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.199165+0000 mgr.y (mgr.14152) 32 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: audit 2026-03-10T11:23:55.317691+0000 mgr.y (mgr.14152) 33 : audit [DBG] from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm05=y;vm07=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cephadm 2026-03-10T11:23:55.318684+0000 mgr.y (mgr.14152) 34 : cephadm [INF] Saving service mgr spec with placement vm05=y;vm07=x;count:2
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cephadm 2026-03-10T11:23:55.347167+0000 mgr.y (mgr.14152) 35 : cephadm [INF] Deploying daemon mgr.x on vm07
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.673886+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.674211+0000 mon.a (mon.0) 199 : cluster [INF] mon.a calling monitor election
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.674912+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.676252+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.679431+0000 mon.a (mon.0) 201 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]}
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.679470+0000 mon.a (mon.0) 202 : cluster [DBG] fsmap
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.679488+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.679595+0000 mon.a (mon.0) 204 : cluster [DBG] mgrmap e13: y(active, since 24s)
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: cluster 2026-03-10T11:23:55.684976+0000 mon.a (mon.0) 205 : cluster [INF] overall HEALTH_OK
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: audit 2026-03-10T11:23:55.947651+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: audit 2026-03-10T11:23:55.952968+0000 mon.a (mon.0) 207 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: audit 2026-03-10T11:23:55.953649+0000 mon.a (mon.0) 208 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:56 vm07 bash[17804]: audit 2026-03-10T11:23:55.954041+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:23:56.447 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:56 vm07 bash[18531]: debug 2026-03-10T11:23:56.214+0000 7f293a911000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T11:23:56.448 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T11:23:56.448 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T11:23:56.448 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000132398 s, 3.9 MB/s
2026-03-10T11:23:56.448 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-10T11:23:56.497 DEBUG:teuthology.orchestra.run.vm07:> stat /dev/vde
2026-03-10T11:23:56.543 INFO:teuthology.orchestra.run.vm07.stdout: File: /dev/vde
2026-03-10T11:23:56.543 INFO:teuthology.orchestra.run.vm07.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-10T11:23:56.544 INFO:teuthology.orchestra.run.vm07.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-10T11:23:56.544 INFO:teuthology.orchestra.run.vm07.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-10T11:23:56.544 INFO:teuthology.orchestra.run.vm07.stdout:Access: 2026-03-10 11:23:42.178874050 +0000
2026-03-10T11:23:56.544 INFO:teuthology.orchestra.run.vm07.stdout:Modify: 2026-03-10 11:23:41.010874050 +0000
2026-03-10T11:23:56.544 INFO:teuthology.orchestra.run.vm07.stdout:Change: 2026-03-10 11:23:41.010874050 +0000
2026-03-10T11:23:56.544 INFO:teuthology.orchestra.run.vm07.stdout: Birth: -
2026-03-10T11:23:56.544 DEBUG:teuthology.orchestra.run.vm07:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-10T11:23:56.591 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records in
2026-03-10T11:23:56.591 INFO:teuthology.orchestra.run.vm07.stderr:1+0 records out
2026-03-10T11:23:56.591 INFO:teuthology.orchestra.run.vm07.stderr:512 bytes copied, 0.000475008 s, 1.1 MB/s
2026-03-10T11:23:56.592 DEBUG:teuthology.orchestra.run.vm07:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-10T11:23:56.642 INFO:tasks.cephadm:Deploying osd.0 on vm05 with /dev/vde...
2026-03-10T11:23:56.642 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap /dev/vde
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:50.662994+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.199165+0000 mgr.y (mgr.14152) 32 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: audit 2026-03-10T11:23:55.317691+0000 mgr.y (mgr.14152) 33 : audit [DBG] from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm05=y;vm07=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cephadm 2026-03-10T11:23:55.318684+0000 mgr.y (mgr.14152) 34 : cephadm [INF] Saving service mgr spec with placement vm05=y;vm07=x;count:2
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cephadm 2026-03-10T11:23:55.347167+0000 mgr.y (mgr.14152) 35 : cephadm [INF] Deploying daemon mgr.x on vm07
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.673886+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.674211+0000 mon.a (mon.0) 199 : cluster [INF] mon.a calling monitor election
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.674912+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.676252+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.679431+0000 mon.a (mon.0) 201 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]}
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.679470+0000 mon.a (mon.0) 202 : cluster [DBG] fsmap
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.679488+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-10T11:23:56.648 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.679595+0000 mon.a (mon.0) 204 : cluster [DBG] mgrmap e13: y(active, since 24s)
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: cluster 2026-03-10T11:23:55.684976+0000 mon.a (mon.0) 205 : cluster [INF] overall HEALTH_OK
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: audit 2026-03-10T11:23:55.947651+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: audit 2026-03-10T11:23:55.952968+0000 mon.a (mon.0) 207 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: audit 2026-03-10T11:23:55.953649+0000 mon.a (mon.0) 208 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:56 vm05 bash[17453]: audit 2026-03-10T11:23:55.954041+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:50.662994+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.199165+0000 mgr.y (mgr.14152) 32 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: audit 2026-03-10T11:23:55.317691+0000 mgr.y (mgr.14152) 33 : audit [DBG] from='client.14208 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm05=y;vm07=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cephadm 2026-03-10T11:23:55.318684+0000 mgr.y (mgr.14152) 34 : cephadm [INF] Saving service mgr spec with placement vm05=y;vm07=x;count:2
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cephadm 2026-03-10T11:23:55.347167+0000 mgr.y (mgr.14152) 35 : cephadm [INF] Deploying daemon mgr.x on vm07
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.673886+0000 mon.c (mon.1) 3 : cluster [INF] mon.c calling monitor election
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.674211+0000 mon.a (mon.0) 199 : cluster [INF] mon.a calling monitor election
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.674912+0000 mon.b (mon.2) 2 : cluster [INF] mon.b calling monitor election
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.676252+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2)
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.679431+0000 mon.a (mon.0) 201 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]}
2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.679470+0000 mon.a (mon.0) 202 : cluster
[DBG] fsmap 2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.679488+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.679595+0000 mon.a (mon.0) 204 : cluster [DBG] mgrmap e13: y(active, since 24s) 2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: cluster 2026-03-10T11:23:55.684976+0000 mon.a (mon.0) 205 : cluster [INF] overall HEALTH_OK 2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: audit 2026-03-10T11:23:55.947651+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: audit 2026-03-10T11:23:55.952968+0000 mon.a (mon.0) 207 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: audit 2026-03-10T11:23:55.953649+0000 mon.a (mon.0) 208 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:56.649 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:56 vm05 bash[22470]: audit 2026-03-10T11:23:55.954041+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:23:56.697 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:56 vm07 bash[18531]: debug 2026-03-10T11:23:56.518+0000 7f293a911000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:23:57.135 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:57 vm07 bash[18531]: debug 2026-03-10T11:23:57.026+0000 7f293a911000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:23:57.260 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:23:57.273 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch daemon add osd vm05:/dev/vde 2026-03-10T11:23:57.429 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:57 vm07 bash[18531]: debug 2026-03-10T11:23:57.130+0000 7f293a911000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:23:57.429 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:57 vm07 bash[18531]: debug 2026-03-10T11:23:57.314+0000 7f293a911000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:23:57.429 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:57 vm07 bash[17804]: audit 2026-03-10T11:23:56.659836+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:57.484 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:57 vm05 bash[22470]: audit 2026-03-10T11:23:56.659836+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:57.484 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:57 vm05 bash[17453]: audit 
2026-03-10T11:23:56.659836+0000 mon.a (mon.0) 210 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:23:57.692 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:57 vm07 bash[18531]: debug 2026-03-10T11:23:57.422+0000 7f293a911000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:23:57.692 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:57 vm07 bash[18531]: debug 2026-03-10T11:23:57.478+0000 7f293a911000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:23:57.692 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:57 vm07 bash[18531]: debug 2026-03-10T11:23:57.626+0000 7f293a911000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:23:57.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:57 vm07 bash[18531]: debug 2026-03-10T11:23:57.686+0000 7f293a911000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:23:57.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:57 vm07 bash[18531]: debug 2026-03-10T11:23:57.758+0000 7f293a911000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:23:58.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:58 vm07 bash[17804]: cluster 2026-03-10T11:23:57.199366+0000 mgr.y (mgr.14152) 36 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:23:58.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:58 vm07 bash[17804]: audit 2026-03-10T11:23:57.739156+0000 mgr.y (mgr.14152) 37 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:58.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:58 vm07 bash[17804]: audit 2026-03-10T11:23:57.742842+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:23:58.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:58 vm07 bash[17804]: audit 2026-03-10T11:23:57.744667+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:23:58.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:58 vm07 bash[17804]: audit 2026-03-10T11:23:57.745130+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:58.698 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:58 vm07 bash[18531]: debug 2026-03-10T11:23:58.274+0000 7f293a911000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:23:58.698 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:58 vm07 bash[18531]: debug 2026-03-10T11:23:58.330+0000 7f293a911000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:23:58.698 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:58 vm07 bash[18531]: debug 2026-03-10T11:23:58.394+0000 7f293a911000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:23:58.767 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:58 vm05 bash[17453]: cluster 2026-03-10T11:23:57.199366+0000 mgr.y (mgr.14152) 36 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:23:58.767 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 
10 11:23:58 vm05 bash[17453]: audit 2026-03-10T11:23:57.739156+0000 mgr.y (mgr.14152) 37 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:58.767 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:58 vm05 bash[17453]: audit 2026-03-10T11:23:57.742842+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:23:58.767 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:58 vm05 bash[17453]: audit 2026-03-10T11:23:57.744667+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:23:58.767 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:58 vm05 bash[17453]: audit 2026-03-10T11:23:57.745130+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:58.768 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:58 vm05 bash[22470]: cluster 2026-03-10T11:23:57.199366+0000 mgr.y (mgr.14152) 36 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:23:58.768 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:58 vm05 bash[22470]: audit 2026-03-10T11:23:57.739156+0000 mgr.y (mgr.14152) 37 : audit [DBG] from='client.14220 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:23:58.768 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:58 vm05 bash[22470]: audit 2026-03-10T11:23:57.742842+0000 mon.a (mon.0) 211 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:23:58.768 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:58 vm05 bash[22470]: audit 2026-03-10T11:23:57.744667+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:23:58.768 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:58 vm05 bash[22470]: audit 2026-03-10T11:23:57.745130+0000 mon.a (mon.0) 213 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:59.198 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:58 vm07 bash[18531]: debug 2026-03-10T11:23:58.738+0000 7f293a911000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:23:59.198 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:58 vm07 bash[18531]: debug 2026-03-10T11:23:58.806+0000 7f293a911000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:23:59.198 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:58 vm07 bash[18531]: debug 2026-03-10T11:23:58.866+0000 7f293a911000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:23:59.198 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:58 vm07 bash[18531]: debug 2026-03-10T11:23:58.954+0000 7f293a911000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:23:59.555 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:59 vm07 bash[18531]: debug 2026-03-10T11:23:59.262+0000 
7f293a911000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:23:59.555 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:59 vm07 bash[18531]: debug 2026-03-10T11:23:59.434+0000 7f293a911000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:23:59.555 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:59 vm07 bash[18531]: debug 2026-03-10T11:23:59.490+0000 7f293a911000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:59 vm07 bash[18531]: debug 2026-03-10T11:23:59.550+0000 7f293a911000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:23:59 vm07 bash[18531]: debug 2026-03-10T11:23:59.690+0000 7f293a911000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:58.696140+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:58.752030+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:58.755721+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: cephadm 2026-03-10T11:23:58.756775+0000 mgr.y (mgr.14152) 38 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
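The repeated "Module <name> has missing NOTIFY_TYPES member" lines come from the standby mgr.x loading its Python modules: the v17.2.0 manager logs one warning per module that does not declare a NOTIFY_TYPES attribute, and the module still loads. If the noise raises doubt, module and manager state can be checked from any node with the admin keyring, for example (a sketch; the conf/keyring/fsid flags used elsewhere in this log are omitted for brevity):

    # List loaded/enabled mgr modules, then show the active/standby mgr map.
    sudo /home/ubuntu/cephtest/cephadm shell -- ceph mgr module ls
    sudo /home/ubuntu/cephtest/cephadm shell -- ceph mgr dump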
2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:58.757120+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:58.757625+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:58.757994+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: cephadm 2026-03-10T11:23:58.758448+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Reconfiguring daemon mgr.y on vm05 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:59.016805+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:59.018663+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:59.019251+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:59.019636+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:23:59.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:23:59 vm07 bash[17804]: audit 2026-03-10T11:23:59.022944+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:58.696140+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:58.752030+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:58.755721+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: cephadm 2026-03-10T11:23:58.756775+0000 mgr.y (mgr.14152) 38 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
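The audit trail above shows what a cephadm "reconfigure" actually is: rather than redeploying the container, the active mgr re-derives the daemon's keyring (the auth get-or-create for mgr.y with the caps shown) and a minimal ceph.conf (config generate-minimal-conf), then rewrites those files on the daemon's host. The same two artifacts can be produced by hand with the commands recorded in the audit entries (a sketch, flags omitted as before):

    # Minimal conf that cephadm distributes to daemon hosts.
    sudo /home/ubuntu/cephtest/cephadm shell -- ceph config generate-minimal-conf
    # Daemon keyring with the caps listed in audit entry 217 above.
    sudo /home/ubuntu/cephtest/cephadm shell -- ceph auth get-or-create mgr.y \
        mon 'profile mgr' osd 'allow *' mds 'allow *'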
2026-03-10T11:24:00.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:58.757120+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:24:00.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:58.757625+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:24:00.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:58.757994+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:00.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: cephadm 2026-03-10T11:23:58.758448+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Reconfiguring daemon mgr.y on vm05 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:59.016805+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:59.018663+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:59.019251+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:59.019636+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:23:59 vm05 bash[22470]: audit 2026-03-10T11:23:59.022944+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:58.696140+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:58.752030+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:58.755721+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: cephadm 2026-03-10T11:23:58.756775+0000 mgr.y (mgr.14152) 38 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
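Every cluster and audit entry in this capture appears three times because teuthology attaches one journal watcher per mon container (mon.a and mon.c on vm05, mon.b on vm07) and each monitor relays the shared cluster log into its own journal. The watcher is a plain journal follow per systemd unit, in the same form the run uses for osd.0 further below; the mon.a unit name here is inferred from the fsid in this log:

    # Follow a containerized daemon's journal from the tail onward.
    sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.a.service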
2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:58.757120+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:58.757625+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:58.757994+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: cephadm 2026-03-10T11:23:58.758448+0000 mgr.y (mgr.14152) 39 : cephadm [INF] Reconfiguring daemon mgr.y on vm05 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:59.016805+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:59.018663+0000 mon.a (mon.0) 221 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:59.019251+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:59.019636+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:00.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:23:59 vm05 bash[17453]: audit 2026-03-10T11:23:59.022944+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:00.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:24:00 vm07 bash[18531]: debug 2026-03-10T11:24:00.186+0000 7f293a911000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:24:01.012 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:00 vm05 bash[22470]: cluster 2026-03-10T11:23:59.199522+0000 mgr.y (mgr.14152) 40 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:01.012 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:00 vm05 bash[22470]: cluster 2026-03-10T11:24:00.190003+0000 mon.a (mon.0) 225 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:24:01.012 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:00 vm05 bash[22470]: audit 2026-03-10T11:24:00.191433+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.? 
192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:24:01.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:00 vm05 bash[22470]: audit 2026-03-10T11:24:00.191716+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:24:01.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:00 vm05 bash[22470]: audit 2026-03-10T11:24:00.192350+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:24:01.013 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:00 vm05 bash[22470]: audit 2026-03-10T11:24:00.192581+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:24:01.013 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:00 vm05 bash[17453]: cluster 2026-03-10T11:23:59.199522+0000 mgr.y (mgr.14152) 40 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:01.013 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:00 vm05 bash[17453]: cluster 2026-03-10T11:24:00.190003+0000 mon.a (mon.0) 225 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:24:01.013 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:00 vm05 bash[17453]: audit 2026-03-10T11:24:00.191433+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:24:01.013 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:00 vm05 bash[17453]: audit 2026-03-10T11:24:00.191716+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:24:01.013 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:00 vm05 bash[17453]: audit 2026-03-10T11:24:00.192350+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:24:01.013 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:00 vm05 bash[17453]: audit 2026-03-10T11:24:00.192581+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:24:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:00 vm07 bash[17804]: cluster 2026-03-10T11:23:59.199522+0000 mgr.y (mgr.14152) 40 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:00 vm07 bash[17804]: cluster 2026-03-10T11:24:00.190003+0000 mon.a (mon.0) 225 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:24:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:00 vm07 bash[17804]: audit 2026-03-10T11:24:00.191433+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:24:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:00 vm07 bash[17804]: audit 2026-03-10T11:24:00.191716+0000 mon.a (mon.0) 227 : audit [DBG] from='mgr.? 
192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:24:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:00 vm07 bash[17804]: audit 2026-03-10T11:24:00.192350+0000 mon.a (mon.0) 228 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:24:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:00 vm07 bash[17804]: audit 2026-03-10T11:24:00.192581+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.? 192.168.123.107:0/2445891741' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:24:02.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:01 vm05 bash[22470]: audit 2026-03-10T11:24:00.904072+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.105:0/3583920005' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0992e6dc-d298-462b-bccd-b74959342712"}]: dispatch 2026-03-10T11:24:02.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:01 vm05 bash[22470]: audit 2026-03-10T11:24:00.904617+0000 mon.a (mon.0) 230 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0992e6dc-d298-462b-bccd-b74959342712"}]: dispatch 2026-03-10T11:24:02.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:01 vm05 bash[22470]: audit 2026-03-10T11:24:00.909995+0000 mon.a (mon.0) 231 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0992e6dc-d298-462b-bccd-b74959342712"}]': finished 2026-03-10T11:24:02.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:01 vm05 bash[22470]: cluster 2026-03-10T11:24:00.910126+0000 mon.a (mon.0) 232 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:01 vm05 bash[22470]: audit 2026-03-10T11:24:00.910218+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:01 vm05 bash[22470]: cluster 2026-03-10T11:24:01.038590+0000 mon.a (mon.0) 234 : cluster [DBG] mgrmap e14: y(active, since 29s), standbys: x 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:01 vm05 bash[22470]: audit 2026-03-10T11:24:01.038736+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:01 vm05 bash[22470]: audit 2026-03-10T11:24:01.554501+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.105:0/3777587022' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:01 vm05 bash[17453]: audit 2026-03-10T11:24:00.904072+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.105:0/3583920005' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0992e6dc-d298-462b-bccd-b74959342712"}]: dispatch 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:01 vm05 bash[17453]: audit 2026-03-10T11:24:00.904617+0000 mon.a (mon.0) 230 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0992e6dc-d298-462b-bccd-b74959342712"}]: dispatch 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:01 vm05 bash[17453]: audit 2026-03-10T11:24:00.909995+0000 mon.a (mon.0) 231 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0992e6dc-d298-462b-bccd-b74959342712"}]': finished 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:01 vm05 bash[17453]: cluster 2026-03-10T11:24:00.910126+0000 mon.a (mon.0) 232 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:01 vm05 bash[17453]: audit 2026-03-10T11:24:00.910218+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:01 vm05 bash[17453]: cluster 2026-03-10T11:24:01.038590+0000 mon.a (mon.0) 234 : cluster [DBG] mgrmap e14: y(active, since 29s), standbys: x 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:01 vm05 bash[17453]: audit 2026-03-10T11:24:01.038736+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:24:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:01 vm05 bash[17453]: audit 2026-03-10T11:24:01.554501+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.105:0/3777587022' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:24:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:01 vm07 bash[17804]: audit 2026-03-10T11:24:00.904072+0000 mon.c (mon.1) 4 : audit [INF] from='client.? 192.168.123.105:0/3583920005' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0992e6dc-d298-462b-bccd-b74959342712"}]: dispatch 2026-03-10T11:24:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:01 vm07 bash[17804]: audit 2026-03-10T11:24:00.904617+0000 mon.a (mon.0) 230 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0992e6dc-d298-462b-bccd-b74959342712"}]: dispatch 2026-03-10T11:24:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:01 vm07 bash[17804]: audit 2026-03-10T11:24:00.909995+0000 mon.a (mon.0) 231 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0992e6dc-d298-462b-bccd-b74959342712"}]': finished 2026-03-10T11:24:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:01 vm07 bash[17804]: cluster 2026-03-10T11:24:00.910126+0000 mon.a (mon.0) 232 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-10T11:24:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:01 vm07 bash[17804]: audit 2026-03-10T11:24:00.910218+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:01 vm07 bash[17804]: cluster 2026-03-10T11:24:01.038590+0000 mon.a (mon.0) 234 : cluster [DBG] mgrmap e14: y(active, since 29s), standbys: x 2026-03-10T11:24:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:01 vm07 bash[17804]: audit 2026-03-10T11:24:01.038736+0000 mon.a (mon.0) 235 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:24:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:01 vm07 bash[17804]: audit 2026-03-10T11:24:01.554501+0000 mon.b (mon.2) 3 : audit [DBG] from='client.? 192.168.123.105:0/3777587022' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:24:03.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:02 vm05 bash[22470]: cluster 2026-03-10T11:24:01.199718+0000 mgr.y (mgr.14152) 41 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:03.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:02 vm05 bash[17453]: cluster 2026-03-10T11:24:01.199718+0000 mgr.y (mgr.14152) 41 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:03.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:02 vm07 bash[17804]: cluster 2026-03-10T11:24:01.199718+0000 mgr.y (mgr.14152) 41 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:05.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:04 vm05 bash[17453]: cluster 2026-03-10T11:24:03.199951+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:05.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:04 vm05 bash[22470]: cluster 2026-03-10T11:24:03.199951+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:05.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:04 vm07 bash[17804]: cluster 2026-03-10T11:24:03.199951+0000 mgr.y (mgr.14152) 42 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:06.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:06 vm05 bash[17453]: cluster 2026-03-10T11:24:05.200200+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:06.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:06 vm05 bash[22470]: cluster 2026-03-10T11:24:05.200200+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:07.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:06 vm07 bash[17804]: cluster 2026-03-10T11:24:05.200200+0000 mgr.y (mgr.14152) 43 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:07.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:07 vm05 
systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:07.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:07 vm05 bash[22470]: audit 2026-03-10T11:24:07.052377+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T11:24:07.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:07 vm05 bash[22470]: audit 2026-03-10T11:24:07.052932+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:07.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:07 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:07.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:07 vm05 bash[17453]: audit 2026-03-10T11:24:07.052377+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T11:24:07.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:07 vm05 bash[17453]: audit 2026-03-10T11:24:07.052932+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:07.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:24:07 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:08.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:07 vm07 bash[17804]: audit 2026-03-10T11:24:07.052377+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T11:24:08.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:07 vm07 bash[17804]: audit 2026-03-10T11:24:07.052932+0000 mon.a (mon.0) 237 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:08.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:07 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
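systemd repeats the KillMode=none warning once per journal stream because every daemon on the host shares the same ceph-<fsid>@.service template, which cephadm at v17.2.0 ships with KillMode=none, leaving container teardown to the unit's own stop commands. For this test run the warning is cosmetic; on a long-lived host it could be silenced with a standard drop-in override, sketched here (hypothetical; the run does not apply it):

    # Hypothetical drop-in: creates /etc/systemd/system/ceph-...@.service.d/override.conf
    sudo systemctl edit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service
    # In the editor, set:
    #   [Service]
    #   KillMode=mixed
    sudo systemctl daemon-reload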
2026-03-10T11:24:08.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:07 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:08.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:24:07 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:08 vm05 bash[22470]: cephadm 2026-03-10T11:24:07.053348+0000 mgr.y (mgr.14152) 44 : cephadm [INF] Deploying daemon osd.0 on vm05 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:08 vm05 bash[22470]: cluster 2026-03-10T11:24:07.200396+0000 mgr.y (mgr.14152) 45 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:08 vm05 bash[22470]: audit 2026-03-10T11:24:08.038034+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:08 vm05 bash[22470]: audit 2026-03-10T11:24:08.050980+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:08 vm05 bash[22470]: audit 2026-03-10T11:24:08.056015+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:08 vm05 bash[22470]: audit 2026-03-10T11:24:08.059136+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:08 vm05 bash[17453]: cephadm 2026-03-10T11:24:07.053348+0000 mgr.y (mgr.14152) 44 : cephadm [INF] Deploying daemon osd.0 on vm05 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:08 vm05 bash[17453]: cluster 2026-03-10T11:24:07.200396+0000 mgr.y (mgr.14152) 45 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:08 vm05 bash[17453]: audit 2026-03-10T11:24:08.038034+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:08 vm05 bash[17453]: audit 2026-03-10T11:24:08.050980+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:08 vm05 bash[17453]: audit 2026-03-10T11:24:08.056015+0000 mon.a (mon.0) 240 : 
audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:08.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:08 vm05 bash[17453]: audit 2026-03-10T11:24:08.059136+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:09.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:08 vm07 bash[17804]: cephadm 2026-03-10T11:24:07.053348+0000 mgr.y (mgr.14152) 44 : cephadm [INF] Deploying daemon osd.0 on vm05 2026-03-10T11:24:09.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:08 vm07 bash[17804]: cluster 2026-03-10T11:24:07.200396+0000 mgr.y (mgr.14152) 45 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:09.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:08 vm07 bash[17804]: audit 2026-03-10T11:24:08.038034+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:09.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:08 vm07 bash[17804]: audit 2026-03-10T11:24:08.050980+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:09.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:08 vm07 bash[17804]: audit 2026-03-10T11:24:08.056015+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:09.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:08 vm07 bash[17804]: audit 2026-03-10T11:24:08.059136+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:10.986 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:10 vm05 bash[22470]: cluster 2026-03-10T11:24:09.201681+0000 mgr.y (mgr.14152) 46 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:10.986 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:10 vm05 bash[17453]: cluster 2026-03-10T11:24:09.201681+0000 mgr.y (mgr.14152) 46 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:11.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:10 vm07 bash[17804]: cluster 2026-03-10T11:24:09.201681+0000 mgr.y (mgr.14152) 46 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:11.426 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 0 on host 'vm05' 2026-03-10T11:24:11.484 DEBUG:teuthology.orchestra.run.vm05:osd.0> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.0.service 2026-03-10T11:24:11.485 INFO:tasks.cephadm:Deploying osd.1 on vm05 with /dev/vdd... 
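Each OSD in this job is created with the same two-step recipe, which the lines below repeat for osd.1 on /dev/vdd: wipe the device with ceph-volume, then hand it to the orchestrator. Condensed from the commands in this log, with the same image, fsid, conf, and keyring, only wrapped for readability:

    # Step 1: destroy any leftover LVM/partition state on the target device.
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap /dev/vdd
    # Step 2: ask the orchestrator to build an OSD on the clean device.
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch daemon add osd vm05:/dev/vdd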
2026-03-10T11:24:11.485 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap /dev/vdd 2026-03-10T11:24:12.133 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:24:12.146 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch daemon add osd vm05:/dev/vdd 2026-03-10T11:24:12.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:11 vm07 bash[17804]: audit 2026-03-10T11:24:10.930515+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:12.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:11 vm07 bash[17804]: audit 2026-03-10T11:24:10.940198+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:12.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:11 vm07 bash[17804]: audit 2026-03-10T11:24:11.184464+0000 mon.c (mon.1) 5 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/2004210335,v1:192.168.123.105:6803/2004210335]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:24:12.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:11 vm07 bash[17804]: audit 2026-03-10T11:24:11.184995+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:24:12.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:11 vm07 bash[17804]: cluster 2026-03-10T11:24:11.201908+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:12.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:11 vm07 bash[17804]: audit 2026-03-10T11:24:11.419264+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:12.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:11 vm07 bash[17804]: audit 2026-03-10T11:24:11.463230+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:12.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:11 vm07 bash[17804]: audit 2026-03-10T11:24:11.465838+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:12.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:11 vm07 bash[17804]: audit 2026-03-10T11:24:11.466389+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:11 vm05 bash[17453]: audit 2026-03-10T11:24:10.930515+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:11 vm05 bash[17453]: audit 2026-03-10T11:24:10.940198+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:12.347 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:11 vm05 bash[17453]: audit 2026-03-10T11:24:11.184464+0000 mon.c (mon.1) 5 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/2004210335,v1:192.168.123.105:6803/2004210335]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:24:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:11 vm05 bash[17453]: audit 2026-03-10T11:24:11.184995+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:24:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:11 vm05 bash[17453]: cluster 2026-03-10T11:24:11.201908+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:11 vm05 bash[17453]: audit 2026-03-10T11:24:11.419264+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:11 vm05 bash[17453]: audit 2026-03-10T11:24:11.463230+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:11 vm05 bash[17453]: audit 2026-03-10T11:24:11.465838+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:12.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:11 vm05 bash[17453]: audit 2026-03-10T11:24:11.466389+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:12.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:11 vm05 bash[22470]: audit 2026-03-10T11:24:10.930515+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:12.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:11 vm05 bash[22470]: audit 2026-03-10T11:24:10.940198+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:12.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:11 vm05 bash[22470]: audit 2026-03-10T11:24:11.184464+0000 mon.c (mon.1) 5 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/2004210335,v1:192.168.123.105:6803/2004210335]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:24:12.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:11 vm05 bash[22470]: audit 2026-03-10T11:24:11.184995+0000 mon.a (mon.0) 244 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:24:12.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:11 vm05 bash[22470]: cluster 2026-03-10T11:24:11.201908+0000 mgr.y (mgr.14152) 47 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:12.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:11 vm05 bash[22470]: audit 2026-03-10T11:24:11.419264+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:12.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 
10 11:24:11 vm05 bash[22470]: audit 2026-03-10T11:24:11.463230+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:12.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:11 vm05 bash[22470]: audit 2026-03-10T11:24:11.465838+0000 mon.a (mon.0) 247 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:12.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:11 vm05 bash[22470]: audit 2026-03-10T11:24:11.466389+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:13.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:12 vm05 bash[17453]: audit 2026-03-10T11:24:11.952337+0000 mon.a (mon.0) 249 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T11:24:13.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:12 vm05 bash[17453]: cluster 2026-03-10T11:24:11.952384+0000 mon.a (mon.0) 250 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T11:24:13.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:12 vm05 bash[17453]: audit 2026-03-10T11:24:11.953528+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:13.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:12 vm05 bash[17453]: audit 2026-03-10T11:24:11.953849+0000 mon.c (mon.1) 6 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/2004210335,v1:192.168.123.105:6803/2004210335]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:13.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:12 vm05 bash[17453]: audit 2026-03-10T11:24:11.954220+0000 mon.a (mon.0) 252 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:13.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:12 vm05 bash[17453]: audit 2026-03-10T11:24:12.764800+0000 mgr.y (mgr.14152) 48 : audit [DBG] from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:24:13.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:12 vm05 bash[17453]: audit 2026-03-10T11:24:12.766828+0000 mon.a (mon.0) 253 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:12 vm05 bash[17453]: audit 2026-03-10T11:24:12.768742+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:12 vm05 bash[17453]: audit 2026-03-10T11:24:12.769405+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 
10 11:24:12 vm05 bash[22470]: audit 2026-03-10T11:24:11.952337+0000 mon.a (mon.0) 249 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:12 vm05 bash[22470]: cluster 2026-03-10T11:24:11.952384+0000 mon.a (mon.0) 250 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:12 vm05 bash[22470]: audit 2026-03-10T11:24:11.953528+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:12 vm05 bash[22470]: audit 2026-03-10T11:24:11.953849+0000 mon.c (mon.1) 6 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/2004210335,v1:192.168.123.105:6803/2004210335]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:12 vm05 bash[22470]: audit 2026-03-10T11:24:11.954220+0000 mon.a (mon.0) 252 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:12 vm05 bash[22470]: audit 2026-03-10T11:24:12.764800+0000 mgr.y (mgr.14152) 48 : audit [DBG] from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:12 vm05 bash[22470]: audit 2026-03-10T11:24:12.766828+0000 mon.a (mon.0) 253 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:12 vm05 bash[22470]: audit 2026-03-10T11:24:12.768742+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:12 vm05 bash[22470]: audit 2026-03-10T11:24:12.769405+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:13.098 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:24:12 vm05 bash[25160]: debug 2026-03-10T11:24:12.960+0000 7f8032a10700 -1 osd.0 0 waiting for initial osdmap 2026-03-10T11:24:13.098 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:24:12 vm05 bash[25160]: debug 2026-03-10T11:24:12.964+0000 7f802cba6700 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T11:24:13.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:12 vm07 bash[17804]: audit 2026-03-10T11:24:11.952337+0000 mon.a (mon.0) 249 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-10T11:24:13.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:12 vm07 bash[17804]: cluster 2026-03-10T11:24:11.952384+0000 mon.a (mon.0) 250 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 
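(The audit trail above is the normal OSD bring-up handshake: osd.0 asks the monitors to record its device class ("osd crush set-device-class ... hdd") and then to place itself in the CRUSH map under host=vm05/root=default ("osd crush create-or-move"), and each accepted change bumps the osdmap epoch, e6 at this point. A minimal way to inspect the same state by hand, assuming a shell with the client.admin keyring as used elsewhere in this run; these are standard Ceph CLI calls, not commands taken from this job:

  ceph osd tree                      # CRUSH hierarchy: osd.0 should sit under host vm05 with class hdd
  ceph osd crush class ls-osd hdd    # OSD ids currently assigned to the 'hdd' device class
  ceph osd dump | head -1            # plain-text dump begins with the current osdmap epoch
)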
2026-03-10T11:24:13.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:12 vm07 bash[17804]: audit 2026-03-10T11:24:11.953528+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:13.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:12 vm07 bash[17804]: audit 2026-03-10T11:24:11.953849+0000 mon.c (mon.1) 6 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/2004210335,v1:192.168.123.105:6803/2004210335]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:13.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:12 vm07 bash[17804]: audit 2026-03-10T11:24:11.954220+0000 mon.a (mon.0) 252 : audit [INF] from='osd.0 ' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:13.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:12 vm07 bash[17804]: audit 2026-03-10T11:24:12.764800+0000 mgr.y (mgr.14152) 48 : audit [DBG] from='client.14241 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:24:13.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:12 vm07 bash[17804]: audit 2026-03-10T11:24:12.766828+0000 mon.a (mon.0) 253 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:24:13.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:12 vm07 bash[17804]: audit 2026-03-10T11:24:12.768742+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:24:13.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:12 vm07 bash[17804]: audit 2026-03-10T11:24:12.769405+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:14.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:13 vm05 bash[17453]: audit 2026-03-10T11:24:12.954750+0000 mon.a (mon.0) 256 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T11:24:14.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:13 vm05 bash[17453]: cluster 2026-03-10T11:24:12.954805+0000 mon.a (mon.0) 257 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T11:24:14.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:13 vm05 bash[17453]: audit 2026-03-10T11:24:12.955458+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:14.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:13 vm05 bash[17453]: audit 2026-03-10T11:24:12.967369+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:14.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:13 vm05 bash[17453]: cluster 2026-03-10T11:24:13.202134+0000 mgr.y (mgr.14152) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:14.347 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:13 vm05 bash[22470]: audit 2026-03-10T11:24:12.954750+0000 mon.a (mon.0) 256 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T11:24:14.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:13 vm05 bash[22470]: cluster 2026-03-10T11:24:12.954805+0000 mon.a (mon.0) 257 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T11:24:14.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:13 vm05 bash[22470]: audit 2026-03-10T11:24:12.955458+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:14.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:13 vm05 bash[22470]: audit 2026-03-10T11:24:12.967369+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:14.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:13 vm05 bash[22470]: cluster 2026-03-10T11:24:13.202134+0000 mgr.y (mgr.14152) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:14.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:13 vm07 bash[17804]: audit 2026-03-10T11:24:12.954750+0000 mon.a (mon.0) 256 : audit [INF] from='osd.0 ' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T11:24:14.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:13 vm07 bash[17804]: cluster 2026-03-10T11:24:12.954805+0000 mon.a (mon.0) 257 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-10T11:24:14.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:13 vm07 bash[17804]: audit 2026-03-10T11:24:12.955458+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:14.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:13 vm07 bash[17804]: audit 2026-03-10T11:24:12.967369+0000 mon.a (mon.0) 259 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:14.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:13 vm07 bash[17804]: cluster 2026-03-10T11:24:13.202134+0000 mgr.y (mgr.14152) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-10T11:24:15.234 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:14 vm05 bash[17453]: cluster 2026-03-10T11:24:12.150419+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:24:15.235 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:14 vm05 bash[17453]: cluster 2026-03-10T11:24:12.150540+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:24:15.235 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:14 vm05 bash[17453]: cluster 2026-03-10T11:24:13.964128+0000 mon.a (mon.0) 260 : cluster [INF] osd.0 [v2:192.168.123.105:6802/2004210335,v1:192.168.123.105:6803/2004210335] boot 2026-03-10T11:24:15.235 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:14 vm05 bash[17453]: cluster 2026-03-10T11:24:13.964244+0000 mon.a (mon.0) 261 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T11:24:15.235 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:14 vm05 bash[17453]: audit 
2026-03-10T11:24:13.965274+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:15.235 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:14 vm05 bash[22470]: cluster 2026-03-10T11:24:12.150419+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:24:15.235 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:14 vm05 bash[22470]: cluster 2026-03-10T11:24:12.150540+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:24:15.235 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:14 vm05 bash[22470]: cluster 2026-03-10T11:24:13.964128+0000 mon.a (mon.0) 260 : cluster [INF] osd.0 [v2:192.168.123.105:6802/2004210335,v1:192.168.123.105:6803/2004210335] boot 2026-03-10T11:24:15.235 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:14 vm05 bash[22470]: cluster 2026-03-10T11:24:13.964244+0000 mon.a (mon.0) 261 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T11:24:15.235 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:14 vm05 bash[22470]: audit 2026-03-10T11:24:13.965274+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:15.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:14 vm07 bash[17804]: cluster 2026-03-10T11:24:12.150419+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:24:15.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:14 vm07 bash[17804]: cluster 2026-03-10T11:24:12.150540+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:24:15.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:14 vm07 bash[17804]: cluster 2026-03-10T11:24:13.964128+0000 mon.a (mon.0) 260 : cluster [INF] osd.0 [v2:192.168.123.105:6802/2004210335,v1:192.168.123.105:6803/2004210335] boot 2026-03-10T11:24:15.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:14 vm07 bash[17804]: cluster 2026-03-10T11:24:13.964244+0000 mon.a (mon.0) 261 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-10T11:24:15.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:14 vm07 bash[17804]: audit 2026-03-10T11:24:13.965274+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:24:16.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:15 vm05 bash[22470]: cluster 2026-03-10T11:24:15.202390+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:16.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:15 vm05 bash[22470]: audit 2026-03-10T11:24:15.711048+0000 mon.a (mon.0) 263 : audit [INF] from='client.? 192.168.123.105:0/853811255' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9cbc5424-3289-45dc-8763-da809c9c9e84"}]: dispatch 2026-03-10T11:24:16.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:15 vm05 bash[22470]: audit 2026-03-10T11:24:15.715607+0000 mon.a (mon.0) 264 : audit [INF] from='client.? 
192.168.123.105:0/853811255' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9cbc5424-3289-45dc-8763-da809c9c9e84"}]': finished 2026-03-10T11:24:16.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:15 vm05 bash[22470]: cluster 2026-03-10T11:24:15.715687+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e9: 2 total, 1 up, 2 in 2026-03-10T11:24:16.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:15 vm05 bash[22470]: audit 2026-03-10T11:24:15.715733+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:15 vm05 bash[17453]: cluster 2026-03-10T11:24:15.202390+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:15 vm05 bash[17453]: audit 2026-03-10T11:24:15.711048+0000 mon.a (mon.0) 263 : audit [INF] from='client.? 192.168.123.105:0/853811255' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9cbc5424-3289-45dc-8763-da809c9c9e84"}]: dispatch 2026-03-10T11:24:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:15 vm05 bash[17453]: audit 2026-03-10T11:24:15.715607+0000 mon.a (mon.0) 264 : audit [INF] from='client.? 192.168.123.105:0/853811255' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9cbc5424-3289-45dc-8763-da809c9c9e84"}]': finished 2026-03-10T11:24:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:15 vm05 bash[17453]: cluster 2026-03-10T11:24:15.715687+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e9: 2 total, 1 up, 2 in 2026-03-10T11:24:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:15 vm05 bash[17453]: audit 2026-03-10T11:24:15.715733+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:16.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:15 vm07 bash[17804]: cluster 2026-03-10T11:24:15.202390+0000 mgr.y (mgr.14152) 50 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:16.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:15 vm07 bash[17804]: audit 2026-03-10T11:24:15.711048+0000 mon.a (mon.0) 263 : audit [INF] from='client.? 192.168.123.105:0/853811255' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9cbc5424-3289-45dc-8763-da809c9c9e84"}]: dispatch 2026-03-10T11:24:16.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:15 vm07 bash[17804]: audit 2026-03-10T11:24:15.715607+0000 mon.a (mon.0) 264 : audit [INF] from='client.? 
192.168.123.105:0/853811255' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9cbc5424-3289-45dc-8763-da809c9c9e84"}]': finished 2026-03-10T11:24:16.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:15 vm07 bash[17804]: cluster 2026-03-10T11:24:15.715687+0000 mon.a (mon.0) 265 : cluster [DBG] osdmap e9: 2 total, 1 up, 2 in 2026-03-10T11:24:16.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:15 vm07 bash[17804]: audit 2026-03-10T11:24:15.715733+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:16 vm05 bash[22470]: audit 2026-03-10T11:24:16.354458+0000 mon.b (mon.2) 4 : audit [DBG] from='client.? 192.168.123.105:0/1353682617' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:24:17.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:16 vm05 bash[17453]: audit 2026-03-10T11:24:16.354458+0000 mon.b (mon.2) 4 : audit [DBG] from='client.? 192.168.123.105:0/1353682617' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:24:17.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:16 vm07 bash[17804]: audit 2026-03-10T11:24:16.354458+0000 mon.b (mon.2) 4 : audit [DBG] from='client.? 192.168.123.105:0/1353682617' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:24:18.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:18 vm05 bash[22470]: cluster 2026-03-10T11:24:17.202628+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:18.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:18 vm05 bash[17453]: cluster 2026-03-10T11:24:17.202628+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:18.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:18 vm07 bash[17804]: cluster 2026-03-10T11:24:17.202628+0000 mgr.y (mgr.14152) 51 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:20.254 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:19 vm05 bash[17453]: cluster 2026-03-10T11:24:18.985521+0000 mon.a (mon.0) 267 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T11:24:20.254 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:19 vm05 bash[17453]: audit 2026-03-10T11:24:18.985607+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:20.254 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:19 vm05 bash[17453]: cluster 2026-03-10T11:24:19.202864+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:20.254 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:19 vm05 bash[22470]: cluster 2026-03-10T11:24:18.985521+0000 mon.a (mon.0) 267 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T11:24:20.254 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:19 vm05 bash[22470]: audit 2026-03-10T11:24:18.985607+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:20.254 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:19 vm05 bash[22470]: cluster 2026-03-10T11:24:19.202864+0000 mgr.y (mgr.14152) 
52 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:20.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:19 vm07 bash[17804]: cluster 2026-03-10T11:24:18.985521+0000 mon.a (mon.0) 267 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-10T11:24:20.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:19 vm07 bash[17804]: audit 2026-03-10T11:24:18.985607+0000 mon.a (mon.0) 268 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:20.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:19 vm07 bash[17804]: cluster 2026-03-10T11:24:19.202864+0000 mgr.y (mgr.14152) 52 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:21.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:21 vm05 bash[22470]: cephadm 2026-03-10T11:24:20.320436+0000 mgr.y (mgr.14152) 53 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T11:24:21.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:21 vm05 bash[22470]: audit 2026-03-10T11:24:20.326635+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:21.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:21 vm05 bash[22470]: audit 2026-03-10T11:24:20.327751+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:24:21.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:21 vm05 bash[22470]: audit 2026-03-10T11:24:20.332011+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:21.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:21 vm05 bash[17453]: cephadm 2026-03-10T11:24:20.320436+0000 mgr.y (mgr.14152) 53 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T11:24:21.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:21 vm05 bash[17453]: audit 2026-03-10T11:24:20.326635+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:21.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:21 vm05 bash[17453]: audit 2026-03-10T11:24:20.327751+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:24:21.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:21 vm05 bash[17453]: audit 2026-03-10T11:24:20.332011+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:21 vm07 bash[17804]: cephadm 2026-03-10T11:24:20.320436+0000 mgr.y (mgr.14152) 53 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T11:24:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:21 vm07 bash[17804]: audit 2026-03-10T11:24:20.326635+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:21 vm07 bash[17804]: audit 2026-03-10T11:24:20.327751+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 
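(The "Detected new or changed devices on vm05" entry followed by "config rm ... osd_memory_target" appears to be cephadm's inventory refresh: when the device/daemon picture on a host changes, the mgr drops the stale per-host memory override so it can be recomputed later, which matters when osd_memory_target autotuning is in play. An illustrative way to check what, if anything, is set afterwards; the job output itself does not show these values:

  ceph config get osd.0 osd_memory_target      # effective value for one daemon
  ceph config dump | grep osd_memory_target    # any host- or daemon-level overrides
)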
2026-03-10T11:24:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:21 vm07 bash[17804]: audit 2026-03-10T11:24:20.332011+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:22.368 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:22 vm05 bash[22470]: cluster 2026-03-10T11:24:21.203151+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:22.368 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:22 vm05 bash[22470]: audit 2026-03-10T11:24:21.908644+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T11:24:22.368 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:22 vm05 bash[22470]: audit 2026-03-10T11:24:21.909084+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:22.368 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:22 vm05 bash[17453]: cluster 2026-03-10T11:24:21.203151+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:22.368 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:22 vm05 bash[17453]: audit 2026-03-10T11:24:21.908644+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T11:24:22.368 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:22 vm05 bash[17453]: audit 2026-03-10T11:24:21.909084+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:22.619 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:24:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:22.619 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:22.619 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:22.619 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:24:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:22.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:22 vm07 bash[17804]: cluster 2026-03-10T11:24:21.203151+0000 mgr.y (mgr.14152) 54 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:22.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:22 vm07 bash[17804]: audit 2026-03-10T11:24:21.908644+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T11:24:22.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:22 vm07 bash[17804]: audit 2026-03-10T11:24:21.909084+0000 mon.a (mon.0) 273 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:22.876 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:22.876 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:24:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:22.876 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:24:22.876 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:24:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
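(The repeated systemd complaints above are expected with this container image: the unit template cephadm installs, /etc/systemd/system/ceph-<fsid>@.service, sets KillMode=none so that systemd does not reap the container's processes itself, leaving cleanup to the unit's own stop commands, and systemd on Ubuntu 22.04 now flags that mode as deprecated. For a unit one actually controls, the usual remedy is a drop-in override rather than editing the installed unit; this is a sketch only and should not be applied to cephadm-managed units, which rely on the shipped setting:

  sudo systemctl edit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service
  # in the override that opens, set a supported kill mode:
  [Service]
  KillMode=mixed
  # then make sure systemd rereads the units:
  sudo systemctl daemon-reload
)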
2026-03-10T11:24:23.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:23 vm07 bash[17804]: cephadm 2026-03-10T11:24:21.909396+0000 mgr.y (mgr.14152) 55 : cephadm [INF] Deploying daemon osd.1 on vm05 2026-03-10T11:24:23.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:23 vm07 bash[17804]: audit 2026-03-10T11:24:22.857788+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:23.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:23 vm07 bash[17804]: audit 2026-03-10T11:24:22.858014+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:23.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:23 vm07 bash[17804]: audit 2026-03-10T11:24:22.864267+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:23.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:23 vm07 bash[17804]: audit 2026-03-10T11:24:22.865387+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:23 vm05 bash[22470]: cephadm 2026-03-10T11:24:21.909396+0000 mgr.y (mgr.14152) 55 : cephadm [INF] Deploying daemon osd.1 on vm05 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:23 vm05 bash[22470]: audit 2026-03-10T11:24:22.857788+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:23 vm05 bash[22470]: audit 2026-03-10T11:24:22.858014+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:23 vm05 bash[22470]: audit 2026-03-10T11:24:22.864267+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:23 vm05 bash[22470]: audit 2026-03-10T11:24:22.865387+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:23 vm05 bash[17453]: cephadm 2026-03-10T11:24:21.909396+0000 mgr.y (mgr.14152) 55 : cephadm [INF] Deploying daemon osd.1 on vm05 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:23 vm05 bash[17453]: audit 2026-03-10T11:24:22.857788+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:23 vm05 bash[17453]: audit 2026-03-10T11:24:22.858014+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:23 vm05 bash[17453]: audit 2026-03-10T11:24:22.864267+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:23.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:23 vm05 bash[17453]: audit 2026-03-10T11:24:22.865387+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:24.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:24 vm07 bash[17804]: cluster 2026-03-10T11:24:23.203385+0000 mgr.y (mgr.14152) 56 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:24.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:24 vm05 bash[22470]: cluster 2026-03-10T11:24:23.203385+0000 mgr.y (mgr.14152) 56 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:24.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:24 vm05 bash[17453]: cluster 2026-03-10T11:24:23.203385+0000 mgr.y (mgr.14152) 56 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:25.626 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:25 vm05 bash[22470]: audit 2026-03-10T11:24:25.355245+0000 mon.a (mon.0) 278 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:24:25.626 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:25 vm05 bash[22470]: audit 2026-03-10T11:24:25.356479+0000 mon.b (mon.2) 5 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/1089345282,v1:192.168.123.105:6811/1089345282]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:24:25.626 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:25 vm05 bash[17453]: audit 2026-03-10T11:24:25.355245+0000 mon.a (mon.0) 278 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:24:25.626 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:25 vm05 bash[17453]: audit 2026-03-10T11:24:25.356479+0000 mon.b (mon.2) 5 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/1089345282,v1:192.168.123.105:6811/1089345282]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:24:25.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:25 vm07 bash[17804]: audit 2026-03-10T11:24:25.355245+0000 mon.a (mon.0) 278 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:24:25.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:25 vm07 bash[17804]: audit 2026-03-10T11:24:25.356479+0000 mon.b (mon.2) 5 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/1089345282,v1:192.168.123.105:6811/1089345282]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-10T11:24:26.226 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 1 on host 'vm05' 2026-03-10T11:24:26.305 DEBUG:teuthology.orchestra.run.vm05:osd.1> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.1.service 2026-03-10T11:24:26.306 INFO:tasks.cephadm:Deploying osd.2 on vm05 with /dev/vdc... 
2026-03-10T11:24:26.306 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap /dev/vdc 2026-03-10T11:24:26.505 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: cluster 2026-03-10T11:24:25.203679+0000 mgr.y (mgr.14152) 57 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:26.505 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:25.393828+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T11:24:26.505 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: cluster 2026-03-10T11:24:25.393946+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T11:24:26.505 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:25.394055+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:26.505 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:25.394947+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:26.505 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:25.396247+0000 mon.b (mon.2) 6 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/1089345282,v1:192.168.123.105:6811/1089345282]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:26.505 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:25.799724+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:26.505 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:25.804207+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:26.505 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:26.217619+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:26.220078+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:26.223425+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:26 vm05 bash[17453]: audit 2026-03-10T11:24:26.227855+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:26.506 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: cluster 2026-03-10T11:24:25.203679+0000 mgr.y (mgr.14152) 57 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:25.393828+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: cluster 2026-03-10T11:24:25.393946+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:25.394055+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:25.394947+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:25.396247+0000 mon.b (mon.2) 6 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/1089345282,v1:192.168.123.105:6811/1089345282]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:25.799724+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:25.804207+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:26.217619+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:26.220078+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:26.223425+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:26.506 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:26 vm05 bash[22470]: audit 2026-03-10T11:24:26.227855+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:26.506 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:24:26 vm05 bash[28302]: debug 2026-03-10T11:24:26.400+0000 7fa281c9c700 -1 osd.1 0 waiting for initial osdmap 2026-03-10T11:24:26.506 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:24:26 vm05 bash[28302]: debug 2026-03-10T11:24:26.408+0000 
7fa27ce34700 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: cluster 2026-03-10T11:24:25.203679+0000 mgr.y (mgr.14152) 57 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:25.393828+0000 mon.a (mon.0) 279 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: cluster 2026-03-10T11:24:25.393946+0000 mon.a (mon.0) 280 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:25.394055+0000 mon.a (mon.0) 281 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:25.394947+0000 mon.a (mon.0) 282 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:25.396247+0000 mon.b (mon.2) 6 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/1089345282,v1:192.168.123.105:6811/1089345282]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:25.799724+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:25.804207+0000 mon.a (mon.0) 284 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:26.217619+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:26.220078+0000 mon.a (mon.0) 286 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:26.223425+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:26 vm07 bash[17804]: audit 2026-03-10T11:24:26.227855+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:24:26.961 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:24:26.969 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm 
--image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch daemon add osd vm05:/dev/vdc 2026-03-10T11:24:27.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:27 vm05 bash[22470]: audit 2026-03-10T11:24:26.394549+0000 mon.a (mon.0) 289 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T11:24:27.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:27 vm05 bash[22470]: cluster 2026-03-10T11:24:26.394611+0000 mon.a (mon.0) 290 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T11:24:27.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:27 vm05 bash[22470]: audit 2026-03-10T11:24:26.395543+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:27.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:27 vm05 bash[22470]: audit 2026-03-10T11:24:26.403104+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:27.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:27 vm05 bash[22470]: audit 2026-03-10T11:24:27.393550+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:24:27.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:27 vm05 bash[22470]: audit 2026-03-10T11:24:27.394835+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:24:27.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:27 vm05 bash[22470]: audit 2026-03-10T11:24:27.395260+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:24:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:27 vm05 bash[17453]: audit 2026-03-10T11:24:26.394549+0000 mon.a (mon.0) 289 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished 2026-03-10T11:24:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:27 vm05 bash[17453]: cluster 2026-03-10T11:24:26.394611+0000 mon.a (mon.0) 290 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-10T11:24:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:27 vm05 bash[17453]: audit 2026-03-10T11:24:26.395543+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:27 vm05 bash[17453]: audit 2026-03-10T11:24:26.403104+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:24:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:27 vm05 bash[17453]: audit 2026-03-10T11:24:27.393550+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: 
dispatch
2026-03-10T11:24:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:27 vm05 bash[17453]: audit 2026-03-10T11:24:27.394835+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T11:24:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:27 vm05 bash[17453]: audit 2026-03-10T11:24:27.395260+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:27.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:27 vm07 bash[17804]: audit 2026-03-10T11:24:26.394549+0000 mon.a (mon.0) 289 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T11:24:27.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:27 vm07 bash[17804]: cluster 2026-03-10T11:24:26.394611+0000 mon.a (mon.0) 290 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-10T11:24:27.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:27 vm07 bash[17804]: audit 2026-03-10T11:24:26.395543+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:24:27.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:27 vm07 bash[17804]: audit 2026-03-10T11:24:26.403104+0000 mon.a (mon.0) 292 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:24:27.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:27 vm07 bash[17804]: audit 2026-03-10T11:24:27.393550+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T11:24:27.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:27 vm07 bash[17804]: audit 2026-03-10T11:24:27.394835+0000 mon.a (mon.0) 294 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T11:24:27.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:27 vm07 bash[17804]: audit 2026-03-10T11:24:27.395260+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:28.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:28 vm07 bash[17804]: cluster 2026-03-10T11:24:26.315298+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:24:28.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:28 vm07 bash[17804]: cluster 2026-03-10T11:24:26.315391+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:24:28.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:28 vm07 bash[17804]: cluster 2026-03-10T11:24:27.203896+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail
2026-03-10T11:24:28.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:28 vm07 bash[17804]: audit 2026-03-10T11:24:27.392023+0000 mgr.y (mgr.14152) 59 : audit [DBG] from='client.24139 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:24:28.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:28 vm07 bash[17804]: audit 2026-03-10T11:24:27.405654+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:24:28.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:28 vm07 bash[17804]: cluster 2026-03-10T11:24:27.421906+0000 mon.a (mon.0) 297 : cluster [INF] osd.1 [v2:192.168.123.105:6810/1089345282,v1:192.168.123.105:6811/1089345282] boot
2026-03-10T11:24:28.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:28 vm07 bash[17804]: cluster 2026-03-10T11:24:27.421990+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T11:24:28.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:28 vm07 bash[17804]: audit 2026-03-10T11:24:27.422441+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:24:28.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:28 vm05 bash[22470]: cluster 2026-03-10T11:24:26.315298+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:24:28.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:28 vm05 bash[22470]: cluster 2026-03-10T11:24:26.315391+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:24:28.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:28 vm05 bash[22470]: cluster 2026-03-10T11:24:27.203896+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:28 vm05 bash[22470]: audit 2026-03-10T11:24:27.392023+0000 mgr.y (mgr.14152) 59 : audit [DBG] from='client.24139 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:28 vm05 bash[22470]: audit 2026-03-10T11:24:27.405654+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:28 vm05 bash[22470]: cluster 2026-03-10T11:24:27.421906+0000 mon.a (mon.0) 297 : cluster [INF] osd.1 [v2:192.168.123.105:6810/1089345282,v1:192.168.123.105:6811/1089345282] boot
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:28 vm05 bash[22470]: cluster 2026-03-10T11:24:27.421990+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:28 vm05 bash[22470]: audit 2026-03-10T11:24:27.422441+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:28 vm05 bash[17453]: cluster 2026-03-10T11:24:26.315298+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:28 vm05 bash[17453]: cluster 2026-03-10T11:24:26.315391+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:28 vm05 bash[17453]: cluster 2026-03-10T11:24:27.203896+0000 mgr.y (mgr.14152) 58 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 4.8 MiB used, 20 GiB / 20 GiB avail
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:28 vm05 bash[17453]: audit 2026-03-10T11:24:27.392023+0000 mgr.y (mgr.14152) 59 : audit [DBG] from='client.24139 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:28 vm05 bash[17453]: audit 2026-03-10T11:24:27.405654+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:28 vm05 bash[17453]: cluster 2026-03-10T11:24:27.421906+0000 mon.a (mon.0) 297 : cluster [INF] osd.1 [v2:192.168.123.105:6810/1089345282,v1:192.168.123.105:6811/1089345282] boot
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:28 vm05 bash[17453]: cluster 2026-03-10T11:24:27.421990+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-10T11:24:28.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:28 vm05 bash[17453]: audit 2026-03-10T11:24:27.422441+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:24:29.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:29 vm07 bash[17804]: cluster 2026-03-10T11:24:28.422677+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T11:24:29.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:29 vm05 bash[22470]: cluster 2026-03-10T11:24:28.422677+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T11:24:29.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:29 vm05 bash[17453]: cluster 2026-03-10T11:24:28.422677+0000 mon.a (mon.0) 300 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-10T11:24:30.686 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:30 vm05 bash[17453]: cluster 2026-03-10T11:24:29.204138+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 9.6 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:30.686 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:30 vm05 bash[22470]: cluster 2026-03-10T11:24:29.204138+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 9.6 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:30 vm07 bash[17804]: cluster 2026-03-10T11:24:29.204138+0000 mgr.y (mgr.14152) 60 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 9.6 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: audit 2026-03-10T11:24:30.732387+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: audit 2026-03-10T11:24:30.734755+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: audit 2026-03-10T11:24:30.738674+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: audit 2026-03-10T11:24:31.239332+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: audit 2026-03-10T11:24:31.255015+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: audit 2026-03-10T11:24:31.518048+0000 mon.a (mon.0) 306 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58079681-6944-4372-ab7d-0aa5717818bf"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: audit 2026-03-10T11:24:31.519164+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.105:0/1438932109' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58079681-6944-4372-ab7d-0aa5717818bf"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: audit 2026-03-10T11:24:31.565459+0000 mon.a (mon.0) 307 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "58079681-6944-4372-ab7d-0aa5717818bf"}]': finished
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: cluster 2026-03-10T11:24:31.565522+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:31 vm05 bash[17453]: audit 2026-03-10T11:24:31.566413+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: audit 2026-03-10T11:24:30.732387+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: audit 2026-03-10T11:24:30.734755+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: audit 2026-03-10T11:24:30.738674+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: audit 2026-03-10T11:24:31.239332+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: audit 2026-03-10T11:24:31.255015+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: audit 2026-03-10T11:24:31.518048+0000 mon.a (mon.0) 306 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58079681-6944-4372-ab7d-0aa5717818bf"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: audit 2026-03-10T11:24:31.519164+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.105:0/1438932109' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58079681-6944-4372-ab7d-0aa5717818bf"}]: dispatch
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: audit 2026-03-10T11:24:31.565459+0000 mon.a (mon.0) 307 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "58079681-6944-4372-ab7d-0aa5717818bf"}]': finished
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: cluster 2026-03-10T11:24:31.565522+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-10T11:24:32.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:31 vm05 bash[22470]: audit 2026-03-10T11:24:31.566413+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: audit 2026-03-10T11:24:30.732387+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: audit 2026-03-10T11:24:30.734755+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: audit 2026-03-10T11:24:30.738674+0000 mon.a (mon.0) 303 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: audit 2026-03-10T11:24:31.239332+0000 mon.a (mon.0) 304 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: audit 2026-03-10T11:24:31.255015+0000 mon.a (mon.0) 305 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: audit 2026-03-10T11:24:31.518048+0000 mon.a (mon.0) 306 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58079681-6944-4372-ab7d-0aa5717818bf"}]: dispatch
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: audit 2026-03-10T11:24:31.519164+0000 mon.b (mon.2) 7 : audit [INF] from='client.? 192.168.123.105:0/1438932109' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "58079681-6944-4372-ab7d-0aa5717818bf"}]: dispatch
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: audit 2026-03-10T11:24:31.565459+0000 mon.a (mon.0) 307 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "58079681-6944-4372-ab7d-0aa5717818bf"}]': finished
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: cluster 2026-03-10T11:24:31.565522+0000 mon.a (mon.0) 308 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in
2026-03-10T11:24:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:31 vm07 bash[17804]: audit 2026-03-10T11:24:31.566413+0000 mon.a (mon.0) 309 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:33.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:32 vm05 bash[17453]: cluster 2026-03-10T11:24:31.204422+0000 mgr.y (mgr.14152) 61 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:33.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:32 vm05 bash[17453]: audit 2026-03-10T11:24:32.276899+0000 mon.a (mon.0) 310 : audit [DBG] from='client.? 192.168.123.105:0/201008611' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T11:24:33.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:32 vm05 bash[22470]: cluster 2026-03-10T11:24:31.204422+0000 mgr.y (mgr.14152) 61 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:33.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:32 vm05 bash[22470]: audit 2026-03-10T11:24:32.276899+0000 mon.a (mon.0) 310 : audit [DBG] from='client.? 192.168.123.105:0/201008611' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T11:24:33.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:32 vm07 bash[17804]: cluster 2026-03-10T11:24:31.204422+0000 mgr.y (mgr.14152) 61 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:33.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:32 vm07 bash[17804]: audit 2026-03-10T11:24:32.276899+0000 mon.a (mon.0) 310 : audit [DBG] from='client.? 192.168.123.105:0/201008611' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T11:24:35.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:34 vm05 bash[22470]: cluster 2026-03-10T11:24:33.204781+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:35.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:34 vm05 bash[17453]: cluster 2026-03-10T11:24:33.204781+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:35.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:34 vm07 bash[17804]: cluster 2026-03-10T11:24:33.204781+0000 mgr.y (mgr.14152) 62 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 9.7 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:37.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:36 vm05 bash[22470]: cluster 2026-03-10T11:24:35.205105+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:37.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:36 vm05 bash[17453]: cluster 2026-03-10T11:24:35.205105+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:37.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:36 vm07 bash[17804]: cluster 2026-03-10T11:24:35.205105+0000 mgr.y (mgr.14152) 63 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:38.847 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:38.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:38.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:38 vm05 bash[22470]: cluster 2026-03-10T11:24:37.205327+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:38.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:38 vm05 bash[22470]: audit 2026-03-10T11:24:38.081307+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T11:24:38.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:38 vm05 bash[22470]: audit 2026-03-10T11:24:38.081954+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:38.848 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:38.848 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:38.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:38.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:38 vm05 bash[17453]: cluster 2026-03-10T11:24:37.205327+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:38.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:38 vm05 bash[17453]: audit 2026-03-10T11:24:38.081307+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T11:24:38.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:38 vm05 bash[17453]: audit 2026-03-10T11:24:38.081954+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:39.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:38 vm07 bash[17804]: cluster 2026-03-10T11:24:37.205327+0000 mgr.y (mgr.14152) 64 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:39.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:38 vm07 bash[17804]: audit 2026-03-10T11:24:38.081307+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T11:24:39.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:38 vm07 bash[17804]: audit 2026-03-10T11:24:38.081954+0000 mon.a (mon.0) 312 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:39.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:39.347 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:39.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:39.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:39.348 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:24:38 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:39.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:39 vm05 bash[22470]: cephadm 2026-03-10T11:24:38.082547+0000 mgr.y (mgr.14152) 65 : cephadm [INF] Deploying daemon osd.2 on vm05
2026-03-10T11:24:39.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:39 vm05 bash[22470]: audit 2026-03-10T11:24:39.014020+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:39.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:39 vm05 bash[22470]: audit 2026-03-10T11:24:39.020318+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:24:39.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:39 vm05 bash[22470]: audit 2026-03-10T11:24:39.023849+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:39.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:39 vm05 bash[22470]: audit 2026-03-10T11:24:39.025119+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:24:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:39 vm05 bash[17453]: cephadm 2026-03-10T11:24:38.082547+0000 mgr.y (mgr.14152) 65 : cephadm [INF] Deploying daemon osd.2 on vm05
2026-03-10T11:24:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:39 vm05 bash[17453]: audit 2026-03-10T11:24:39.014020+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:39 vm05 bash[17453]: audit 2026-03-10T11:24:39.020318+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:24:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:39 vm05 bash[17453]: audit 2026-03-10T11:24:39.023849+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:39 vm05 bash[17453]: audit 2026-03-10T11:24:39.025119+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:24:40.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:39 vm07 bash[17804]: cephadm 2026-03-10T11:24:38.082547+0000 mgr.y (mgr.14152) 65 : cephadm [INF] Deploying daemon osd.2 on vm05
2026-03-10T11:24:40.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:39 vm07 bash[17804]: audit 2026-03-10T11:24:39.014020+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:40.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:39 vm07 bash[17804]: audit 2026-03-10T11:24:39.020318+0000 mon.a (mon.0) 314 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:24:40.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:39 vm07 bash[17804]: audit 2026-03-10T11:24:39.023849+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:40.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:39 vm07 bash[17804]: audit 2026-03-10T11:24:39.025119+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:24:41.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:40 vm05 bash[22470]: cluster 2026-03-10T11:24:39.205586+0000 mgr.y (mgr.14152) 66 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:41.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:40 vm05 bash[17453]: cluster 2026-03-10T11:24:39.205586+0000 mgr.y (mgr.14152) 66 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:40 vm07 bash[17804]: cluster 2026-03-10T11:24:39.205586+0000 mgr.y (mgr.14152) 66 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:42.474 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 2 on host 'vm05'
2026-03-10T11:24:42.550 DEBUG:teuthology.orchestra.run.vm05:osd.2> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.2.service
2026-03-10T11:24:42.551 INFO:tasks.cephadm:Deploying osd.3 on vm05 with /dev/vdb...
2026-03-10T11:24:42.551 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap /dev/vdb
2026-03-10T11:24:42.793 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:42 vm07 bash[17804]: cluster 2026-03-10T11:24:41.205802+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:42.793 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:42 vm07 bash[17804]: audit 2026-03-10T11:24:42.014872+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:42.793 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:42 vm07 bash[17804]: audit 2026-03-10T11:24:42.022542+0000 mon.c (mon.1) 7 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/420660061,v1:192.168.123.105:6819/420660061]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T11:24:42.793 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:42 vm07 bash[17804]: audit 2026-03-10T11:24:42.024768+0000 mon.a (mon.0) 318 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T11:24:42.793 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:42 vm07 bash[17804]: audit 2026-03-10T11:24:42.171979+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:42.793 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:42 vm07 bash[17804]: audit 2026-03-10T11:24:42.467785+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:42.793 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:42 vm07 bash[17804]: audit 2026-03-10T11:24:42.488810+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:24:42.793 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:42 vm07 bash[17804]: audit 2026-03-10T11:24:42.489635+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:42.793 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:42 vm07 bash[17804]: audit 2026-03-10T11:24:42.490295+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:42 vm05 bash[22470]: cluster 2026-03-10T11:24:41.205802+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:42 vm05 bash[22470]: audit 2026-03-10T11:24:42.014872+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:42 vm05 bash[22470]: audit 2026-03-10T11:24:42.022542+0000 mon.c (mon.1) 7 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/420660061,v1:192.168.123.105:6819/420660061]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:42 vm05 bash[22470]: audit 2026-03-10T11:24:42.024768+0000 mon.a (mon.0) 318 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:42 vm05 bash[22470]: audit 2026-03-10T11:24:42.171979+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:42 vm05 bash[22470]: audit 2026-03-10T11:24:42.467785+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:42 vm05 bash[22470]: audit 2026-03-10T11:24:42.488810+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:42 vm05 bash[22470]: audit 2026-03-10T11:24:42.489635+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:42 vm05 bash[22470]: audit 2026-03-10T11:24:42.490295+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:42 vm05 bash[17453]: cluster 2026-03-10T11:24:41.205802+0000 mgr.y (mgr.14152) 67 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:42 vm05 bash[17453]: audit 2026-03-10T11:24:42.014872+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:42 vm05 bash[17453]: audit 2026-03-10T11:24:42.022542+0000 mon.c (mon.1) 7 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/420660061,v1:192.168.123.105:6819/420660061]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:42 vm05 bash[17453]: audit 2026-03-10T11:24:42.024768+0000 mon.a (mon.0) 318 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:42 vm05 bash[17453]: audit 2026-03-10T11:24:42.171979+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:42 vm05 bash[17453]: audit 2026-03-10T11:24:42.467785+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:42 vm05 bash[17453]: audit 2026-03-10T11:24:42.488810+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:42 vm05 bash[17453]: audit 2026-03-10T11:24:42.489635+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:42.811 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:42 vm05 bash[17453]: audit 2026-03-10T11:24:42.490295+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:24:43.215 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:24:43.227 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch daemon add osd vm05:/dev/vdb
2026-03-10T11:24:44.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: audit 2026-03-10T11:24:43.019649+0000 mon.a (mon.0) 324 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: cluster 2026-03-10T11:24:43.019840+0000 mon.a (mon.0) 325 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: audit 2026-03-10T11:24:43.019955+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: audit 2026-03-10T11:24:43.023077+0000 mon.c (mon.1) 8 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/420660061,v1:192.168.123.105:6819/420660061]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: audit 2026-03-10T11:24:43.023511+0000 mon.a (mon.0) 327 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: cluster 2026-03-10T11:24:43.206038+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: audit 2026-03-10T11:24:43.661221+0000 mgr.y (mgr.14152) 69 : audit [DBG] from='client.24163 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: audit 2026-03-10T11:24:43.662585+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: audit 2026-03-10T11:24:43.664173+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:44 vm05 bash[22470]: audit 2026-03-10T11:24:43.664617+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: audit 2026-03-10T11:24:43.019649+0000 mon.a (mon.0) 324 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: cluster 2026-03-10T11:24:43.019840+0000 mon.a (mon.0) 325 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: audit 2026-03-10T11:24:43.019955+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: audit 2026-03-10T11:24:43.023077+0000 mon.c (mon.1) 8 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/420660061,v1:192.168.123.105:6819/420660061]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: audit 2026-03-10T11:24:43.023511+0000 mon.a (mon.0) 327 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: cluster 2026-03-10T11:24:43.206038+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: audit 2026-03-10T11:24:43.661221+0000 mgr.y (mgr.14152) 69 : audit [DBG] from='client.24163 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: audit 2026-03-10T11:24:43.662585+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: audit 2026-03-10T11:24:43.664173+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:44 vm05 bash[17453]: audit 2026-03-10T11:24:43.664617+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:44.348 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:24:44 vm05 bash[31446]: debug 2026-03-10T11:24:44.024+0000 7f776bcf8700 -1 osd.2 0 waiting for initial osdmap
2026-03-10T11:24:44.348 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:24:44 vm05 bash[31446]: debug 2026-03-10T11:24:44.032+0000 7f7765e8e700 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: audit 2026-03-10T11:24:43.019649+0000 mon.a (mon.0) 324 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: cluster 2026-03-10T11:24:43.019840+0000 mon.a (mon.0) 325 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: audit 2026-03-10T11:24:43.019955+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: audit 2026-03-10T11:24:43.023077+0000 mon.c (mon.1) 8 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/420660061,v1:192.168.123.105:6819/420660061]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: audit 2026-03-10T11:24:43.023511+0000 mon.a (mon.0) 327 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: cluster 2026-03-10T11:24:43.206038+0000 mgr.y (mgr.14152) 68 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 9.8 MiB used, 40 GiB / 40 GiB avail
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: audit 2026-03-10T11:24:43.661221+0000 mgr.y (mgr.14152) 69 : audit [DBG] from='client.24163 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm05:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: audit 2026-03-10T11:24:43.662585+0000 mon.a (mon.0) 328 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: audit 2026-03-10T11:24:43.664173+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T11:24:44.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:44 vm07 bash[17804]: audit 2026-03-10T11:24:43.664617+0000 mon.a (mon.0) 330 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:45.305 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:45 vm05 bash[22470]: cluster 2026-03-10T11:24:42.990298+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:24:45.305 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:45 vm05 bash[22470]: cluster 2026-03-10T11:24:42.990392+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:24:45.305 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:45 vm05 bash[22470]: audit 2026-03-10T11:24:44.026767+0000 mon.a (mon.0) 331 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T11:24:45.305 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:45 vm05 bash[22470]: cluster 2026-03-10T11:24:44.026993+0000 mon.a (mon.0) 332 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in
2026-03-10T11:24:45.305 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:45 vm05 bash[22470]: audit 2026-03-10T11:24:44.029204+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:45.305 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:45 vm05 bash[22470]: audit 2026-03-10T11:24:44.031938+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:45.305 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:45 vm05 bash[22470]: audit 2026-03-10T11:24:45.030602+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:45.306 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:45 vm05 bash[17453]: cluster 2026-03-10T11:24:42.990298+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:24:45.306 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:45 vm05 bash[17453]: cluster 2026-03-10T11:24:42.990392+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:24:45.306 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:45 vm05 bash[17453]: audit 2026-03-10T11:24:44.026767+0000 mon.a (mon.0) 331 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T11:24:45.306 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:45 vm05 bash[17453]: cluster 2026-03-10T11:24:44.026993+0000 mon.a (mon.0) 332 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in
2026-03-10T11:24:45.306 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:45 vm05 bash[17453]: audit 2026-03-10T11:24:44.029204+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:45.306 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:45 vm05 bash[17453]: audit 2026-03-10T11:24:44.031938+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:45.306 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:45 vm05 bash[17453]: audit 2026-03-10T11:24:45.030602+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:24:45.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:45 vm07 bash[17804]: cluster 2026-03-10T11:24:42.990298+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:24:45.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:45 vm07 bash[17804]: cluster 2026-03-10T11:24:42.990392+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok
["host=vm05", "root=default"]}]': finished 2026-03-10T11:24:45.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:45 vm07 bash[17804]: cluster 2026-03-10T11:24:44.026993+0000 mon.a (mon.0) 332 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-10T11:24:45.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:45 vm07 bash[17804]: audit 2026-03-10T11:24:44.029204+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:24:45.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:45 vm07 bash[17804]: audit 2026-03-10T11:24:44.031938+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:24:45.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:45 vm07 bash[17804]: audit 2026-03-10T11:24:45.030602+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:24:46.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:46 vm05 bash[17453]: cluster 2026-03-10T11:24:45.039266+0000 mon.a (mon.0) 336 : cluster [INF] osd.2 [v2:192.168.123.105:6818/420660061,v1:192.168.123.105:6819/420660061] boot 2026-03-10T11:24:46.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:46 vm05 bash[17453]: cluster 2026-03-10T11:24:45.039396+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T11:24:46.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:46 vm05 bash[17453]: audit 2026-03-10T11:24:45.043952+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:24:46.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:46 vm05 bash[17453]: cluster 2026-03-10T11:24:45.206300+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 9.9 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:24:46.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:46 vm05 bash[17453]: audit 2026-03-10T11:24:45.245989+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-10T11:24:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:46 vm05 bash[22470]: cluster 2026-03-10T11:24:45.039266+0000 mon.a (mon.0) 336 : cluster [INF] osd.2 [v2:192.168.123.105:6818/420660061,v1:192.168.123.105:6819/420660061] boot 2026-03-10T11:24:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:46 vm05 bash[22470]: cluster 2026-03-10T11:24:45.039396+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T11:24:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:46 vm05 bash[22470]: audit 2026-03-10T11:24:45.043952+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:24:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:46 vm05 bash[22470]: cluster 2026-03-10T11:24:45.206300+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 9.9 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:24:46.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:46 vm05 bash[22470]: audit 2026-03-10T11:24:45.245989+0000 mon.a (mon.0) 339 : audit 
[INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-10T11:24:46.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:46 vm07 bash[17804]: cluster 2026-03-10T11:24:45.039266+0000 mon.a (mon.0) 336 : cluster [INF] osd.2 [v2:192.168.123.105:6818/420660061,v1:192.168.123.105:6819/420660061] boot 2026-03-10T11:24:46.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:46 vm07 bash[17804]: cluster 2026-03-10T11:24:45.039396+0000 mon.a (mon.0) 337 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-10T11:24:46.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:46 vm07 bash[17804]: audit 2026-03-10T11:24:45.043952+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:24:46.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:46 vm07 bash[17804]: cluster 2026-03-10T11:24:45.206300+0000 mgr.y (mgr.14152) 70 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 9.9 MiB used, 40 GiB / 40 GiB avail 2026-03-10T11:24:46.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:46 vm07 bash[17804]: audit 2026-03-10T11:24:45.245989+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]: dispatch 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:47 vm05 bash[17453]: audit 2026-03-10T11:24:46.058568+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:47 vm05 bash[17453]: cluster 2026-03-10T11:24:46.058905+0000 mon.a (mon.0) 341 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:47 vm05 bash[17453]: audit 2026-03-10T11:24:46.059762+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:47 vm05 bash[17453]: audit 2026-03-10T11:24:46.928737+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:47 vm05 bash[17453]: audit 2026-03-10T11:24:46.931376+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:47 vm05 bash[17453]: audit 2026-03-10T11:24:46.935695+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:47 vm05 bash[22470]: audit 2026-03-10T11:24:46.058568+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 
1, "pg_num_max": 32}]': finished 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:47 vm05 bash[22470]: cluster 2026-03-10T11:24:46.058905+0000 mon.a (mon.0) 341 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:47 vm05 bash[22470]: audit 2026-03-10T11:24:46.059762+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:47 vm05 bash[22470]: audit 2026-03-10T11:24:46.928737+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:47 vm05 bash[22470]: audit 2026-03-10T11:24:46.931376+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:24:47.155 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:47 vm05 bash[22470]: audit 2026-03-10T11:24:46.935695+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:47.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:47 vm07 bash[17804]: audit 2026-03-10T11:24:46.058568+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32}]': finished 2026-03-10T11:24:47.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:47 vm07 bash[17804]: cluster 2026-03-10T11:24:46.058905+0000 mon.a (mon.0) 341 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-10T11:24:47.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:47 vm07 bash[17804]: audit 2026-03-10T11:24:46.059762+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-10T11:24:47.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:47 vm07 bash[17804]: audit 2026-03-10T11:24:46.928737+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:47.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:47 vm07 bash[17804]: audit 2026-03-10T11:24:46.931376+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:24:47.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:47 vm07 bash[17804]: audit 2026-03-10T11:24:46.935695+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:24:48.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: cephadm 2026-03-10T11:24:46.921393+0000 mgr.y (mgr.14152) 71 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T11:24:48.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:47.061376+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd='[{"prefix": "osd pool 
application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T11:24:48.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: cluster 2026-03-10T11:24:47.061631+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T11:24:48.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: cluster 2026-03-10T11:24:47.206573+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-10T11:24:48.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:47.823794+0000 mon.a (mon.0) 348 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0e62b553-78b1-4fbe-870e-d68c1967e6be"}]: dispatch 2026-03-10T11:24:48.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:47.824762+0000 mon.b (mon.2) 8 : audit [INF] from='client.? 192.168.123.105:0/3743860140' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0e62b553-78b1-4fbe-870e-d68c1967e6be"}]: dispatch 2026-03-10T11:24:48.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:47.832850+0000 mon.a (mon.0) 349 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0e62b553-78b1-4fbe-870e-d68c1967e6be"}]': finished 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: cluster 2026-03-10T11:24:47.832962+0000 mon.a (mon.0) 350 : cluster [DBG] osdmap e21: 4 total, 3 up, 4 in 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:47.833136+0000 mon.a (mon.0) 351 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:47.843652+0000 mon.a (mon.0) 352 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:48.003878+0000 mon.a (mon.0) 353 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:48.004364+0000 mon.a (mon.0) 354 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:48.004657+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:48.004737+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:48.006052+0000 mon.a (mon.0) 357 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon 
metadata", "id": "a"}]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:48.006098+0000 mon.a (mon.0) 358 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:48 vm05 bash[22470]: audit 2026-03-10T11:24:48.006133+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:48 vm05 bash[17453]: cephadm 2026-03-10T11:24:46.921393+0000 mgr.y (mgr.14152) 71 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:48 vm05 bash[17453]: audit 2026-03-10T11:24:47.061376+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:48 vm05 bash[17453]: cluster 2026-03-10T11:24:47.061631+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:48 vm05 bash[17453]: cluster 2026-03-10T11:24:47.206573+0000 mgr.y (mgr.14152) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 unknown; 0 B data, 15 MiB used, 60 GiB / 60 GiB avail 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:48 vm05 bash[17453]: audit 2026-03-10T11:24:47.823794+0000 mon.a (mon.0) 348 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0e62b553-78b1-4fbe-870e-d68c1967e6be"}]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:48 vm05 bash[17453]: audit 2026-03-10T11:24:47.824762+0000 mon.b (mon.2) 8 : audit [INF] from='client.? 192.168.123.105:0/3743860140' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0e62b553-78b1-4fbe-870e-d68c1967e6be"}]: dispatch 2026-03-10T11:24:48.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:48 vm05 bash[17453]: audit 2026-03-10T11:24:47.832850+0000 mon.a (mon.0) 349 : audit [INF] from='client.? 
2026-03-10T11:24:49.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:49 vm05 bash[17453]: audit 2026-03-10T11:24:48.005995+0000 mon.c (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T11:24:49.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:49 vm05 bash[17453]: audit 2026-03-10T11:24:48.158390+0000 mon.c (mon.1) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T11:24:49.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:49 vm05 bash[17453]: audit 2026-03-10T11:24:48.162153+0000 mon.b (mon.2) 9 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-10T11:24:49.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:49 vm05 bash[17453]: audit 2026-03-10T11:24:48.162830+0000 mon.a (mon.0) 360 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:24:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:49 vm05 bash[17453]: audit 2026-03-10T11:24:48.162895+0000 mon.a (mon.0) 361 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:24:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:49 vm05 bash[17453]: audit 2026-03-10T11:24:48.162929+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:24:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:49 vm05 bash[17453]: audit 2026-03-10T11:24:48.307813+0000 mon.b (mon.2) 10 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-10T11:24:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:49 vm05 bash[17453]: audit 2026-03-10T11:24:48.498852+0000 mon.c (mon.1) 11 : audit [DBG] from='client.? 192.168.123.105:0/2716261551' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T11:24:50.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:50 vm05 bash[17453]: cluster 2026-03-10T11:24:49.206855+0000 mgr.y (mgr.14152) 73 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 16 MiB used, 60 GiB / 60 GiB avail
2026-03-10T11:24:50.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:50 vm05 bash[17453]: cluster 2026-03-10T11:24:49.335951+0000 mon.a (mon.0) 363 : cluster [DBG] mgrmap e15: y(active, since 78s), standbys: x
2026-03-10T11:24:52.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:52 vm05 bash[22470]: cluster 2026-03-10T11:24:51.207127+0000 mgr.y (mgr.14152) 74 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail
2026-03-10T11:24:54.498 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:54 vm05 bash[17453]: cluster 2026-03-10T11:24:53.207439+0000 mgr.y (mgr.14152) 75 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail
2026-03-10T11:24:54.498 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:54 vm05 bash[17453]: audit 2026-03-10T11:24:54.027138+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T11:24:54.498 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:54 vm05 bash[17453]: audit 2026-03-10T11:24:54.027704+0000 mon.a (mon.0) 365 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
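To deploy osd.3, cephadm gathers exactly two pieces of per-daemon state from the mons: the daemon's cephx key and a minimal ceph.conf pointing at the mon addresses. Done by hand that is roughly (a sketch, not commands taken from this run):

    ceph auth get osd.3
    ceph config generate-minimal-conf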
2026-03-10T11:24:54.842 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:24:54 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:24:55.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:54 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
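The KillMode=none warning comes from line 24 of the ceph-<fsid>@.service template that cephadm installs for every containerized daemon; systemd emits it on each daemon-reload and it fans out to every journalctl follower on the host, so it is harmless noise in this run. For a unit you control yourself, the systemd-recommended fix would be a drop-in override, sketched here (hypothetical; this cephadm version appears to ship KillMode=none deliberately, so do not apply this to a managed cluster):

    sudo mkdir -p /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d
    printf '[Service]\nKillMode=mixed\n' | sudo tee /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d/killmode.conf
    sudo systemctl daemon-reload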
2026-03-10T11:24:55.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:55 vm05 bash[17453]: cephadm 2026-03-10T11:24:54.028114+0000 mgr.y (mgr.14152) 76 : cephadm [INF] Deploying daemon osd.3 on vm05
2026-03-10T11:24:55.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:55 vm05 bash[17453]: audit 2026-03-10T11:24:54.945608+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:55.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:55 vm05 bash[17453]: audit 2026-03-10T11:24:54.965844+0000 mon.a (mon.0) 367 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:24:55.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:55 vm05 bash[17453]: audit 2026-03-10T11:24:54.966547+0000 mon.a (mon.0) 368 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:55.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:55 vm05 bash[17453]: audit 2026-03-10T11:24:54.966996+0000 mon.a (mon.0) 369 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:24:56.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:24:56 vm05 bash[22470]: cluster 2026-03-10T11:24:55.207846+0000 mgr.y (mgr.14152) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail
2026-03-10T11:24:58.366 INFO:teuthology.orchestra.run.vm05.stdout:Created osd(s) 3 on host 'vm05'
2026-03-10T11:24:58.434 DEBUG:teuthology.orchestra.run.vm05:osd.3> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.3.service
2026-03-10T11:24:58.435 INFO:tasks.cephadm:Deploying osd.4 on vm07 with /dev/vde...
2026-03-10T11:24:58.435 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap /dev/vde
2026-03-10T11:24:58.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:58 vm05 bash[17453]: cluster 2026-03-10T11:24:57.208132+0000 mgr.y (mgr.14152) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail
2026-03-10T11:24:58.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:58 vm05 bash[17453]: audit 2026-03-10T11:24:57.959315+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:58.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:58 vm05 bash[17453]: audit 2026-03-10T11:24:57.963693+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:58.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:58 vm05 bash[17453]: audit 2026-03-10T11:24:58.164242+0000 mon.c (mon.1) 12 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/311748923,v1:192.168.123.105:6827/311748923]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T11:24:58.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:24:58 vm05 bash[17453]: audit 2026-03-10T11:24:58.164545+0000 mon.a (mon.0) 372 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T11:24:59.048 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T11:24:59.060 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch daemon add osd vm07:/dev/vde
2026-03-10T11:24:59.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:59 vm07 bash[17804]: audit 2026-03-10T11:24:58.362085+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:24:59.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:59 vm07 bash[17804]: audit 2026-03-10T11:24:58.379504+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:24:59.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:59 vm07 bash[17804]: audit 2026-03-10T11:24:58.380289+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:24:59.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:59 vm07 bash[17804]: audit 2026-03-10T11:24:58.380825+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:24:59.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:59 vm07 bash[17804]: audit 2026-03-10T11:24:58.969737+0000 mon.a (mon.0) 377 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T11:24:59.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:59 vm07 bash[17804]: cluster 2026-03-10T11:24:58.969827+0000 mon.a (mon.0) 378 : cluster [DBG] osdmap e22: 4 total, 3 up, 4 in
2026-03-10T11:24:59.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:59 vm07 bash[17804]: audit 2026-03-10T11:24:58.970432+0000 mon.c (mon.1) 13 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/311748923,v1:192.168.123.105:6827/311748923]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:24:59.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:59 vm07 bash[17804]: audit 2026-03-10T11:24:58.970744+0000 mon.a (mon.0) 379 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:24:59.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:24:59 vm07 bash[17804]: audit 2026-03-10T11:24:58.971064+0000 mon.a (mon.0) 380 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
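On first boot osd.3 registers itself in the CRUSH map: it sets its device class, then places itself under host=vm05/root=default with the create-or-move call above. The weight is the device capacity expressed in TiB: these VPS disks are 20 GiB each (the pgmap lines report 60 GiB across three OSDs), and 20 / 1024 = 0.01953, which matches the logged weight of 0.0195.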
2026-03-10T11:25:00.348 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:24:59 vm05 bash[34644]: debug 2026-03-10T11:24:59.976+0000 7fbfdfe61700 -1 osd.3 0 waiting for initial osdmap
2026-03-10T11:25:00.348 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:24:59 vm05 bash[34644]: debug 2026-03-10T11:24:59.984+0000 7fbfd8ff5700 -1 osd.3 23 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T11:25:00.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:00 vm07 bash[17804]: cluster 2026-03-10T11:24:59.208491+0000 mgr.y (mgr.14152) 79 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 17 MiB used, 60 GiB / 60 GiB avail
2026-03-10T11:25:00.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:00 vm07 bash[17804]: audit 2026-03-10T11:24:59.463279+0000 mgr.y (mgr.14152) 80 : audit [DBG] from='client.24173 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:25:00.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:00 vm07 bash[17804]: audit 2026-03-10T11:24:59.464664+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T11:25:00.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:00 vm07 bash[17804]: audit 2026-03-10T11:24:59.465981+0000 mon.a (mon.0) 382 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T11:25:00.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:00 vm07 bash[17804]: audit 2026-03-10T11:24:59.466389+0000 mon.a (mon.0) 383 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:00.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:00 vm07 bash[17804]: audit 2026-03-10T11:24:59.971822+0000 mon.a (mon.0) 384 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]': finished
2026-03-10T11:25:00.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:00 vm07 bash[17804]: cluster 2026-03-10T11:24:59.971951+0000 mon.a (mon.0) 385 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in
2026-03-10T11:25:00.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:00 vm07 bash[17804]: audit 2026-03-10T11:24:59.977932+0000 mon.a (mon.0) 386 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:25:01.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:01 vm07 bash[17804]: cluster 2026-03-10T11:24:59.190144+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
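Before allocating an id for the new OSD on vm07, the mgr queries the osdmap for OSDs in the destroyed state (the osd tree call with states ["destroyed"] above): cephadm reuses destroyed ids in preference to minting new ones. The interactive form should be roughly (a sketch):

    ceph osd tree destroyed --format json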
2026-03-10T11:25:01.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:01 vm07 bash[17804]: cluster 2026-03-10T11:24:59.190254+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:25:01.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:01 vm07 bash[17804]: audit 2026-03-10T11:25:00.977436+0000 mon.a (mon.0) 387 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:25:01.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:01 vm07 bash[17804]: cluster 2026-03-10T11:25:00.986365+0000 mon.a (mon.0) 388 : cluster [INF] osd.3 [v2:192.168.123.105:6826/311748923,v1:192.168.123.105:6827/311748923] boot
2026-03-10T11:25:01.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:01 vm07 bash[17804]: cluster 2026-03-10T11:25:00.986401+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e24: 4 total, 4 up, 4 in
2026-03-10T11:25:01.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:01 vm07 bash[17804]: audit 2026-03-10T11:25:00.986479+0000 mon.a (mon.0) 390 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:25:02.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:02 vm07 bash[17804]: cluster 2026-03-10T11:25:01.208776+0000 mgr.y (mgr.14152) 81 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 18 MiB used, 60 GiB / 60 GiB avail
2026-03-10T11:25:02.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:02 vm07 bash[17804]: cluster 2026-03-10T11:25:01.992933+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e25: 4 total, 4 up, 4 in
2026-03-10T11:25:03.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:03 vm07 bash[17804]: audit 2026-03-10T11:25:02.682783+0000 mon.a (mon.0) 392 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]: dispatch
2026-03-10T11:25:03.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:03 vm07 bash[17804]: audit 2026-03-10T11:25:02.683772+0000 mon.b (mon.2) 11 : audit [INF] from='client.? 192.168.123.107:0/2130771552' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]: dispatch
2026-03-10T11:25:03.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:03 vm07 bash[17804]: audit 2026-03-10T11:25:02.692015+0000 mon.a (mon.0) 393 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]': finished
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]': finished 2026-03-10T11:25:03.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:03 vm07 bash[17804]: cluster 2026-03-10T11:25:02.692070+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e26: 5 total, 4 up, 5 in 2026-03-10T11:25:03.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:03 vm07 bash[17804]: audit 2026-03-10T11:25:02.692185+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:03.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:03 vm07 bash[17804]: audit 2026-03-10T11:25:02.939908+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:03.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:03 vm07 bash[17804]: audit 2026-03-10T11:25:02.941420+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:25:03.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:03 vm07 bash[17804]: audit 2026-03-10T11:25:02.946038+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:03.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:03 vm07 bash[17804]: audit 2026-03-10T11:25:03.338047+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.107:0/2126359001' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:03 vm05 bash[17453]: audit 2026-03-10T11:25:02.682783+0000 mon.a (mon.0) 392 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:03 vm05 bash[17453]: audit 2026-03-10T11:25:02.683772+0000 mon.b (mon.2) 11 : audit [INF] from='client.? 192.168.123.107:0/2130771552' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:03 vm05 bash[17453]: audit 2026-03-10T11:25:02.692015+0000 mon.a (mon.0) 393 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]': finished 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:03 vm05 bash[17453]: cluster 2026-03-10T11:25:02.692070+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e26: 5 total, 4 up, 5 in 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:03 vm05 bash[17453]: audit 2026-03-10T11:25:02.692185+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:03 vm05 bash[17453]: audit 2026-03-10T11:25:02.939908+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:03 vm05 bash[17453]: audit 2026-03-10T11:25:02.941420+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:03 vm05 bash[17453]: audit 2026-03-10T11:25:02.946038+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:03 vm05 bash[17453]: audit 2026-03-10T11:25:03.338047+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.107:0/2126359001' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:03 vm05 bash[22470]: audit 2026-03-10T11:25:02.682783+0000 mon.a (mon.0) 392 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:03 vm05 bash[22470]: audit 2026-03-10T11:25:02.683772+0000 mon.b (mon.2) 11 : audit [INF] from='client.? 192.168.123.107:0/2130771552' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:03 vm05 bash[22470]: audit 2026-03-10T11:25:02.692015+0000 mon.a (mon.0) 393 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5d2d7aab-4d36-465e-b574-aaa4de107693"}]': finished 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:03 vm05 bash[22470]: cluster 2026-03-10T11:25:02.692070+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e26: 5 total, 4 up, 5 in 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:03 vm05 bash[22470]: audit 2026-03-10T11:25:02.692185+0000 mon.a (mon.0) 395 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:03 vm05 bash[22470]: audit 2026-03-10T11:25:02.939908+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:03 vm05 bash[22470]: audit 2026-03-10T11:25:02.941420+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:03 vm05 bash[22470]: audit 2026-03-10T11:25:02.946038+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:03.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:03 vm05 bash[22470]: audit 2026-03-10T11:25:03.338047+0000 mon.b (mon.2) 12 : audit [DBG] from='client.? 192.168.123.107:0/2126359001' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:25:04.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:04 vm07 bash[17804]: cephadm 2026-03-10T11:25:02.932829+0000 mgr.y (mgr.14152) 82 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T11:25:04.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:04 vm07 bash[17804]: cluster 2026-03-10T11:25:03.209041+0000 mgr.y (mgr.14152) 83 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:04.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:04 vm05 bash[17453]: cephadm 2026-03-10T11:25:02.932829+0000 mgr.y (mgr.14152) 82 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T11:25:04.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:04 vm05 bash[17453]: cluster 2026-03-10T11:25:03.209041+0000 mgr.y (mgr.14152) 83 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:04.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:04 vm05 bash[22470]: cephadm 2026-03-10T11:25:02.932829+0000 mgr.y (mgr.14152) 82 : cephadm [INF] Detected new or changed devices on vm05 2026-03-10T11:25:04.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:04 vm05 bash[22470]: cluster 2026-03-10T11:25:03.209041+0000 mgr.y (mgr.14152) 83 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:06.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:06 vm07 bash[17804]: cluster 2026-03-10T11:25:05.209325+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:06.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:06 vm05 bash[22470]: cluster 2026-03-10T11:25:05.209325+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 
23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:06.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:06 vm05 bash[17453]: cluster 2026-03-10T11:25:05.209325+0000 mgr.y (mgr.14152) 84 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:08.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:08 vm07 bash[17804]: cluster 2026-03-10T11:25:07.209536+0000 mgr.y (mgr.14152) 85 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:08.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:08 vm05 bash[22470]: cluster 2026-03-10T11:25:07.209536+0000 mgr.y (mgr.14152) 85 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:08.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:08 vm05 bash[17453]: cluster 2026-03-10T11:25:07.209536+0000 mgr.y (mgr.14152) 85 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:09.540 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:09 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:25:09.541 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:09 vm07 bash[17804]: audit 2026-03-10T11:25:08.764880+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T11:25:09.541 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:09 vm07 bash[17804]: audit 2026-03-10T11:25:08.765430+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:09.541 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:09 vm07 bash[17804]: cephadm 2026-03-10T11:25:08.765830+0000 mgr.y (mgr.14152) 86 : cephadm [INF] Deploying daemon osd.4 on vm07 2026-03-10T11:25:09.541 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:25:09 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
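The KillMode=none warning above comes from the templated cephadm unit file shared by every daemon on the host, which is why the journal followers repeat it for each service start. A minimal sketch for confirming the setting on an affected host, assuming shell access and the fsid recorded in this run:

    # Show the rendered unit for one daemon and flag the deprecated KillMode line
    # (the warning points at line 24 of the template).
    systemctl cat 'ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.4.service' | grep -n 'KillMode'
    # Or inspect the template file the warning names directly.
    grep -n 'KillMode' /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service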
2026-03-10T11:25:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:10 vm07 bash[17804]: cluster 2026-03-10T11:25:09.209825+0000 mgr.y (mgr.14152) 87 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail
2026-03-10T11:25:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:10 vm07 bash[17804]: audit 2026-03-10T11:25:09.624183+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:25:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:10 vm07 bash[17804]: audit 2026-03-10T11:25:09.625788+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:10 vm07 bash[17804]: audit 2026-03-10T11:25:09.626334+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:25:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:10 vm07 bash[17804]: audit 2026-03-10T11:25:09.635252+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
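The orchestra commands that follow deploy OSDs one device at a time: each device is first wiped with ceph-volume, then handed to the orchestrator with `ceph orch daemon add osd`. A condensed sketch of that per-device pattern, using the image, fsid, and paths recorded in this log (the device list here is illustrative):

    # For each spare device on the host: zap it, then let cephadm create the OSD.
    for dev in /dev/vdd /dev/vde; do
      sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 \
        ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap "$dev"
      sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- \
        ceph orch daemon add osd "vm07:$dev"
    done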
2026-03-10T11:25:10.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:10 vm05 bash[17453]: audit 2026-03-10T11:25:09.626334+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:10.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:10 vm05 bash[17453]: audit 2026-03-10T11:25:09.635252+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:12.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:12 vm07 bash[17804]: cluster 2026-03-10T11:25:11.210103+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:12.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:12 vm05 bash[22470]: cluster 2026-03-10T11:25:11.210103+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:12.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:12 vm05 bash[17453]: cluster 2026-03-10T11:25:11.210103+0000 mgr.y (mgr.14152) 88 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:12.978 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 4 on host 'vm07' 2026-03-10T11:25:13.040 DEBUG:teuthology.orchestra.run.vm07:osd.4> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.4.service 2026-03-10T11:25:13.041 INFO:tasks.cephadm:Deploying osd.5 on vm07 with /dev/vdd... 2026-03-10T11:25:13.041 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap /dev/vdd 2026-03-10T11:25:13.703 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T11:25:13.709 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch daemon add osd vm07:/dev/vdd 2026-03-10T11:25:13.846 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:13 vm07 bash[17804]: audit 2026-03-10T11:25:12.571643+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:13.846 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:13 vm07 bash[17804]: audit 2026-03-10T11:25:12.576253+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:13.846 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:13 vm07 bash[17804]: audit 2026-03-10T11:25:12.799202+0000 mon.a (mon.0) 407 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:25:13.846 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:13 vm07 bash[17804]: audit 2026-03-10T11:25:12.800453+0000 mon.b (mon.2) 13 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/774944665,v1:192.168.123.107:6801/774944665]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:25:13.846 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:13 vm07 bash[17804]: audit 2026-03-10T11:25:12.972129+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' 
entity='mgr.y' 2026-03-10T11:25:13.846 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:13 vm07 bash[17804]: audit 2026-03-10T11:25:12.991446+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:25:13.846 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:13 vm07 bash[17804]: audit 2026-03-10T11:25:12.992193+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:13.846 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:13 vm07 bash[17804]: audit 2026-03-10T11:25:12.992639+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:13.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:13 vm05 bash[22470]: audit 2026-03-10T11:25:12.571643+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:13.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:13 vm05 bash[22470]: audit 2026-03-10T11:25:12.576253+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:13.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:13 vm05 bash[22470]: audit 2026-03-10T11:25:12.799202+0000 mon.a (mon.0) 407 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:13 vm05 bash[22470]: audit 2026-03-10T11:25:12.800453+0000 mon.b (mon.2) 13 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/774944665,v1:192.168.123.107:6801/774944665]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:13 vm05 bash[22470]: audit 2026-03-10T11:25:12.972129+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:13 vm05 bash[22470]: audit 2026-03-10T11:25:12.991446+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:13 vm05 bash[22470]: audit 2026-03-10T11:25:12.992193+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:13 vm05 bash[22470]: audit 2026-03-10T11:25:12.992639+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:13 vm05 bash[17453]: audit 2026-03-10T11:25:12.571643+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:13 vm05 bash[17453]: audit 2026-03-10T11:25:12.576253+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:13.848 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:13 vm05 bash[17453]: audit 2026-03-10T11:25:12.799202+0000 mon.a (mon.0) 407 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:13 vm05 bash[17453]: audit 2026-03-10T11:25:12.800453+0000 mon.b (mon.2) 13 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/774944665,v1:192.168.123.107:6801/774944665]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:13 vm05 bash[17453]: audit 2026-03-10T11:25:12.972129+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:13 vm05 bash[17453]: audit 2026-03-10T11:25:12.991446+0000 mon.a (mon.0) 409 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:13 vm05 bash[17453]: audit 2026-03-10T11:25:12.992193+0000 mon.a (mon.0) 410 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:13.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:13 vm05 bash[17453]: audit 2026-03-10T11:25:12.992639+0000 mon.a (mon.0) 411 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:14 vm07 bash[17804]: cluster 2026-03-10T11:25:13.210381+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:14 vm07 bash[17804]: audit 2026-03-10T11:25:13.587284+0000 mon.a (mon.0) 412 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T11:25:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:14 vm07 bash[17804]: cluster 2026-03-10T11:25:13.587385+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-10T11:25:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:14 vm07 bash[17804]: audit 2026-03-10T11:25:13.587633+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:14 vm07 bash[17804]: audit 2026-03-10T11:25:13.588471+0000 mon.a (mon.0) 415 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:14 vm07 bash[17804]: audit 2026-03-10T11:25:13.589685+0000 mon.b (mon.2) 14 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/774944665,v1:192.168.123.107:6801/774944665]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:14 vm07 bash[17804]: audit 
2026-03-10T11:25:14.137283+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:25:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:14 vm07 bash[17804]: audit 2026-03-10T11:25:14.138924+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:25:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:14 vm07 bash[17804]: audit 2026-03-10T11:25:14.139444+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:14.950 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:25:14 vm07 bash[20845]: debug 2026-03-10T11:25:14.590+0000 7fcb6a7b6700 -1 osd.4 0 waiting for initial osdmap 2026-03-10T11:25:14.950 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:25:14 vm07 bash[20845]: debug 2026-03-10T11:25:14.598+0000 7fcb63149700 -1 osd.4 28 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T11:25:15.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:14 vm05 bash[22470]: cluster 2026-03-10T11:25:13.210381+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:14 vm05 bash[22470]: audit 2026-03-10T11:25:13.587284+0000 mon.a (mon.0) 412 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:14 vm05 bash[22470]: cluster 2026-03-10T11:25:13.587385+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:14 vm05 bash[22470]: audit 2026-03-10T11:25:13.587633+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:14 vm05 bash[22470]: audit 2026-03-10T11:25:13.588471+0000 mon.a (mon.0) 415 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:14 vm05 bash[22470]: audit 2026-03-10T11:25:13.589685+0000 mon.b (mon.2) 14 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/774944665,v1:192.168.123.107:6801/774944665]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:14 vm05 bash[22470]: audit 2026-03-10T11:25:14.137283+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:14 vm05 bash[22470]: audit 2026-03-10T11:25:14.138924+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.bootstrap-osd"}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:14 vm05 bash[22470]: audit 2026-03-10T11:25:14.139444+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:14 vm05 bash[17453]: cluster 2026-03-10T11:25:13.210381+0000 mgr.y (mgr.14152) 89 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:14 vm05 bash[17453]: audit 2026-03-10T11:25:13.587284+0000 mon.a (mon.0) 412 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:14 vm05 bash[17453]: cluster 2026-03-10T11:25:13.587385+0000 mon.a (mon.0) 413 : cluster [DBG] osdmap e27: 5 total, 4 up, 5 in 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:14 vm05 bash[17453]: audit 2026-03-10T11:25:13.587633+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:14 vm05 bash[17453]: audit 2026-03-10T11:25:13.588471+0000 mon.a (mon.0) 415 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:14 vm05 bash[17453]: audit 2026-03-10T11:25:13.589685+0000 mon.b (mon.2) 14 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/774944665,v1:192.168.123.107:6801/774944665]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:14 vm05 bash[17453]: audit 2026-03-10T11:25:14.137283+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:14 vm05 bash[17453]: audit 2026-03-10T11:25:14.138924+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:25:15.105 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:14 vm05 bash[17453]: audit 2026-03-10T11:25:14.139444+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:15.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:15 vm07 bash[17804]: audit 2026-03-10T11:25:14.135883+0000 mgr.y (mgr.14152) 90 : audit [DBG] from='client.24200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:25:15.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:15 vm07 bash[17804]: audit 2026-03-10T11:25:14.588833+0000 mon.a (mon.0) 419 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", 
"root=default"]}]': finished 2026-03-10T11:25:15.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:15 vm07 bash[17804]: cluster 2026-03-10T11:25:14.588941+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-10T11:25:15.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:15 vm07 bash[17804]: audit 2026-03-10T11:25:14.591941+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:15.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:15 vm07 bash[17804]: audit 2026-03-10T11:25:14.603696+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:15.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:15 vm07 bash[17804]: cluster 2026-03-10T11:25:15.592189+0000 mon.a (mon.0) 423 : cluster [INF] osd.4 [v2:192.168.123.107:6800/774944665,v1:192.168.123.107:6801/774944665] boot 2026-03-10T11:25:15.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:15 vm07 bash[17804]: cluster 2026-03-10T11:25:15.592894+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e29: 5 total, 5 up, 5 in 2026-03-10T11:25:15.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:15 vm07 bash[17804]: audit 2026-03-10T11:25:15.594400+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:16.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:15 vm05 bash[22470]: audit 2026-03-10T11:25:14.135883+0000 mgr.y (mgr.14152) 90 : audit [DBG] from='client.24200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:15 vm05 bash[22470]: audit 2026-03-10T11:25:14.588833+0000 mon.a (mon.0) 419 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:15 vm05 bash[22470]: cluster 2026-03-10T11:25:14.588941+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:15 vm05 bash[22470]: audit 2026-03-10T11:25:14.591941+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:15 vm05 bash[22470]: audit 2026-03-10T11:25:14.603696+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:15 vm05 bash[22470]: cluster 2026-03-10T11:25:15.592189+0000 mon.a (mon.0) 423 : cluster [INF] osd.4 [v2:192.168.123.107:6800/774944665,v1:192.168.123.107:6801/774944665] boot 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:15 vm05 bash[22470]: cluster 2026-03-10T11:25:15.592894+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e29: 5 total, 5 up, 5 in 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:15 vm05 bash[22470]: audit 2026-03-10T11:25:15.594400+0000 mon.a (mon.0) 425 : audit [DBG] 
from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:15 vm05 bash[17453]: audit 2026-03-10T11:25:14.135883+0000 mgr.y (mgr.14152) 90 : audit [DBG] from='client.24200 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:15 vm05 bash[17453]: audit 2026-03-10T11:25:14.588833+0000 mon.a (mon.0) 419 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:15 vm05 bash[17453]: cluster 2026-03-10T11:25:14.588941+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:15 vm05 bash[17453]: audit 2026-03-10T11:25:14.591941+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:15 vm05 bash[17453]: audit 2026-03-10T11:25:14.603696+0000 mon.a (mon.0) 422 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:15 vm05 bash[17453]: cluster 2026-03-10T11:25:15.592189+0000 mon.a (mon.0) 423 : cluster [INF] osd.4 [v2:192.168.123.107:6800/774944665,v1:192.168.123.107:6801/774944665] boot 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:15 vm05 bash[17453]: cluster 2026-03-10T11:25:15.592894+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e29: 5 total, 5 up, 5 in 2026-03-10T11:25:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:15 vm05 bash[17453]: audit 2026-03-10T11:25:15.594400+0000 mon.a (mon.0) 425 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:25:16.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:16 vm07 bash[17804]: cluster 2026-03-10T11:25:13.846581+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:25:16.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:16 vm07 bash[17804]: cluster 2026-03-10T11:25:13.846669+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:25:16.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:16 vm07 bash[17804]: cluster 2026-03-10T11:25:15.210686+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:16.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:16 vm07 bash[17804]: cluster 2026-03-10T11:25:16.595320+0000 mon.a (mon.0) 426 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-10T11:25:17.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:16 vm05 bash[22470]: cluster 2026-03-10T11:25:13.846581+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:25:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:16 vm05 bash[22470]: cluster 2026-03-10T11:25:13.846669+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:25:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
11:25:16 vm05 bash[22470]: cluster 2026-03-10T11:25:15.210686+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:16 vm05 bash[22470]: cluster 2026-03-10T11:25:16.595320+0000 mon.a (mon.0) 426 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-10T11:25:17.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:16 vm05 bash[17453]: cluster 2026-03-10T11:25:13.846581+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:25:17.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:16 vm05 bash[17453]: cluster 2026-03-10T11:25:13.846669+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:25:17.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:16 vm05 bash[17453]: cluster 2026-03-10T11:25:15.210686+0000 mgr.y (mgr.14152) 91 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 23 MiB used, 80 GiB / 80 GiB avail 2026-03-10T11:25:17.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:16 vm05 bash[17453]: cluster 2026-03-10T11:25:16.595320+0000 mon.a (mon.0) 426 : cluster [DBG] osdmap e30: 5 total, 5 up, 5 in 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: cluster 2026-03-10T11:25:17.210939+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: cephadm 2026-03-10T11:25:17.231164+0000 mgr.y (mgr.14152) 93 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: audit 2026-03-10T11:25:17.236910+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: audit 2026-03-10T11:25:17.238306+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: cephadm 2026-03-10T11:25:17.238691+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: cephadm 2026-03-10T11:25:17.239044+0000 mgr.y (mgr.14152) 95 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: audit 2026-03-10T11:25:17.243197+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: cluster 2026-03-10T11:25:17.597497+0000 mon.a (mon.0) 430 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: audit 2026-03-10T11:25:18.227537+0000 mon.a (mon.0) 431 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dcefdca8-8af9-4aeb-9472-1fb1d076fa1e"}]: dispatch 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: audit 2026-03-10T11:25:18.228644+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.107:0/3897460211' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dcefdca8-8af9-4aeb-9472-1fb1d076fa1e"}]: dispatch 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: audit 2026-03-10T11:25:18.233871+0000 mon.a (mon.0) 432 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dcefdca8-8af9-4aeb-9472-1fb1d076fa1e"}]': finished 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: cluster 2026-03-10T11:25:18.233900+0000 mon.a (mon.0) 433 : cluster [DBG] osdmap e32: 6 total, 5 up, 6 in 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:18 vm05 bash[22470]: audit 2026-03-10T11:25:18.233948+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: cluster 2026-03-10T11:25:17.210939+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: cephadm 2026-03-10T11:25:17.231164+0000 mgr.y (mgr.14152) 93 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: audit 2026-03-10T11:25:17.236910+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: audit 2026-03-10T11:25:17.238306+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: cephadm 2026-03-10T11:25:17.238691+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: cephadm 2026-03-10T11:25:17.239044+0000 mgr.y (mgr.14152) 95 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: audit 2026-03-10T11:25:17.243197+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: cluster 2026-03-10T11:25:17.597497+0000 mon.a (mon.0) 430 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: audit 2026-03-10T11:25:18.227537+0000 mon.a (mon.0) 431 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dcefdca8-8af9-4aeb-9472-1fb1d076fa1e"}]: dispatch 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: audit 2026-03-10T11:25:18.228644+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.107:0/3897460211' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dcefdca8-8af9-4aeb-9472-1fb1d076fa1e"}]: dispatch 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: audit 2026-03-10T11:25:18.233871+0000 mon.a (mon.0) 432 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dcefdca8-8af9-4aeb-9472-1fb1d076fa1e"}]': finished 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: cluster 2026-03-10T11:25:18.233900+0000 mon.a (mon.0) 433 : cluster [DBG] osdmap e32: 6 total, 5 up, 6 in 2026-03-10T11:25:18.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:18 vm05 bash[17453]: audit 2026-03-10T11:25:18.233948+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: cluster 2026-03-10T11:25:17.210939+0000 mgr.y (mgr.14152) 92 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: cephadm 2026-03-10T11:25:17.231164+0000 mgr.y (mgr.14152) 93 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: audit 2026-03-10T11:25:17.236910+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: audit 2026-03-10T11:25:17.238306+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: cephadm 2026-03-10T11:25:17.238691+0000 mgr.y (mgr.14152) 94 : cephadm [INF] Adjusting osd_memory_target on vm07 to 455.7M 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: cephadm 2026-03-10T11:25:17.239044+0000 mgr.y (mgr.14152) 95 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: audit 2026-03-10T11:25:17.243197+0000 mon.a (mon.0) 429 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: cluster 2026-03-10T11:25:17.597497+0000 mon.a (mon.0) 430 : cluster [DBG] osdmap e31: 5 total, 5 up, 5 in 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: audit 2026-03-10T11:25:18.227537+0000 mon.a (mon.0) 431 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dcefdca8-8af9-4aeb-9472-1fb1d076fa1e"}]: dispatch 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: audit 2026-03-10T11:25:18.228644+0000 mon.b (mon.2) 15 : audit [INF] from='client.? 192.168.123.107:0/3897460211' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dcefdca8-8af9-4aeb-9472-1fb1d076fa1e"}]: dispatch 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: audit 2026-03-10T11:25:18.233871+0000 mon.a (mon.0) 432 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dcefdca8-8af9-4aeb-9472-1fb1d076fa1e"}]': finished 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: cluster 2026-03-10T11:25:18.233900+0000 mon.a (mon.0) 433 : cluster [DBG] osdmap e32: 6 total, 5 up, 6 in 2026-03-10T11:25:18.646 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:18 vm07 bash[17804]: audit 2026-03-10T11:25:18.233948+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:25:19.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:19 vm05 bash[22470]: audit 2026-03-10T11:25:18.853000+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.107:0/567323359' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:25:19.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:19 vm05 bash[17453]: audit 2026-03-10T11:25:18.853000+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 192.168.123.107:0/567323359' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-10T11:25:19.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:19 vm07 bash[17804]: audit 2026-03-10T11:25:18.853000+0000 mon.b (mon.2) 16 : audit [DBG] from='client.? 
192.168.123.107:0/567323359' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T11:25:20.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:20 vm05 bash[22470]: cluster 2026-03-10T11:25:19.211225+0000 mgr.y (mgr.14152) 96 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 28 MiB used, 100 GiB / 100 GiB avail; 97 KiB/s, 0 objects/s recovering
2026-03-10T11:25:22.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:22 vm05 bash[22470]: cluster 2026-03-10T11:25:21.211486+0000 mgr.y (mgr.14152) 97 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 69 KiB/s, 0 objects/s recovering
2026-03-10T11:25:24.546 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:24 vm07 bash[17804]: cluster 2026-03-10T11:25:23.211805+0000 mgr.y (mgr.14152) 98 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 59 KiB/s, 0 objects/s recovering
2026-03-10T11:25:24.546 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:24 vm07 bash[17804]: audit 2026-03-10T11:25:24.285260+0000 mon.a (mon.0) 435 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T11:25:24.546 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:24 vm07 bash[17804]: audit 2026-03-10T11:25:24.285877+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:25.140 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:24 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
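The KillMode=none complaint comes from the cephadm-generated template unit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service; cephadm of the v17.2.0 vintage sets KillMode=none deliberately so systemd never kills the podman-managed container processes, and the same message repeats on every unit-instance start. Purely as a local sketch (cephadm owns and will rewrite this unit, so a cephadm upgrade is the real fix), a systemd drop-in could switch to the 'mixed' mode the warning itself suggests:

    # hypothetical drop-in for the unit named in this log; not part of the test run
    sudo mkdir -p /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d
    printf '[Service]\nKillMode=mixed\n' | \
        sudo tee /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d/killmode.conf
    sudo systemctl daemon-reload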
2026-03-10T11:25:25.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:25 vm07 bash[17804]: cephadm 2026-03-10T11:25:24.286300+0000 mgr.y (mgr.14152) 99 : cephadm [INF] Deploying daemon osd.5 on vm07
2026-03-10T11:25:25.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:25 vm07 bash[17804]: audit 2026-03-10T11:25:25.145634+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:25.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:25 vm07 bash[17804]: audit 2026-03-10T11:25:25.172329+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:25:25.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:25 vm07 bash[17804]: audit 2026-03-10T11:25:25.172994+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:25.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:25 vm07 bash[17804]: audit 2026-03-10T11:25:25.173371+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:25:26.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:26 vm05 bash[22470]: cluster 2026-03-10T11:25:25.212079+0000 mgr.y (mgr.14152) 100 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 49 KiB/s, 0 objects/s recovering
2026-03-10T11:25:28.448 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 5 on host 'vm07'
2026-03-10T11:25:28.511 DEBUG:teuthology.orchestra.run.vm07:osd.5> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.5.service
2026-03-10T11:25:28.511 INFO:tasks.cephadm:Deploying osd.6 on vm07 with /dev/vdc...
2026-03-10T11:25:28.511 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap /dev/vdc
2026-03-10T11:25:28.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:28 vm05 bash[22470]: cluster 2026-03-10T11:25:27.212283+0000 mgr.y (mgr.14152) 101 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 40 KiB/s, 0 objects/s recovering
2026-03-10T11:25:28.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:28 vm05 bash[22470]: audit 2026-03-10T11:25:28.077298+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:28.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:28 vm05 bash[22470]: audit 2026-03-10T11:25:28.081944+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:29.136 INFO:teuthology.orchestra.run.vm07.stdout:
2026-03-10T11:25:29.149 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch daemon add osd vm07:/dev/vdc
2026-03-10T11:25:29.378 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:29 vm07 bash[17804]: audit 2026-03-10T11:25:28.349077+0000 mon.a (mon.0) 443 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T11:25:29.378 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:29 vm07 bash[17804]: audit 2026-03-10T11:25:28.350278+0000 mon.b (mon.2) 17 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1013528300,v1:192.168.123.107:6809/1013528300]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch
2026-03-10T11:25:29.378 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:29 vm07 bash[17804]: audit 2026-03-10T11:25:28.443206+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:29.378 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:29 vm07 bash[17804]: audit 2026-03-10T11:25:28.447357+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:25:29.378 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:29 vm07 bash[17804]: audit 2026-03-10T11:25:28.448365+0000 mon.a (mon.0) 446 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:29.378 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:29 vm07 bash[17804]: audit 2026-03-10T11:25:28.448881+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
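The two teuthology commands above are the per-device OSD deployment loop that tasks.cephadm drives: wipe the device with ceph-volume, then hand it to the orchestrator, which triggers the crush set-device-class/auth get/config dump traffic that follows. Condensed into a shell sketch using the host and device from this run (the -c/-k/--fsid flags are teuthology's; a normal cephadm shell on a bootstrapped host would not need them):

    host=vm07 dev=/dev/vdc
    sudo cephadm ceph-volume -- lvm zap $dev                   # destroy any previous LVM/partition data
    sudo cephadm shell -- ceph orch daemon add osd $host:$dev  # create and start a new OSD on the device
    sudo cephadm shell -- ceph orch ps $host                   # watch for the new osd daemon to appear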
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: cluster 2026-03-10T11:25:29.212601+0000 mgr.y (mgr.14152) 102 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail; 35 KiB/s, 0 objects/s recovering
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: audit 2026-03-10T11:25:29.350645+0000 mon.a (mon.0) 448 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: cluster 2026-03-10T11:25:29.350726+0000 mon.a (mon.0) 449 : cluster [DBG] osdmap e33: 6 total, 5 up, 6 in
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: audit 2026-03-10T11:25:29.350788+0000 mon.a (mon.0) 450 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: audit 2026-03-10T11:25:29.352057+0000 mon.a (mon.0) 451 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: audit 2026-03-10T11:25:29.353319+0000 mon.b (mon.2) 18 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1013528300,v1:192.168.123.107:6809/1013528300]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: audit 2026-03-10T11:25:29.574180+0000 mgr.y (mgr.14152) 103 : audit [DBG] from='client.24227 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: audit 2026-03-10T11:25:29.575500+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: audit 2026-03-10T11:25:29.577066+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T11:25:30.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:30 vm07 bash[17804]: audit 2026-03-10T11:25:29.577510+0000 mon.a (mon.0) 454 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:30.698 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:25:30 vm07 bash[24010]: debug 2026-03-10T11:25:30.362+0000 7efe0d098700 -1 osd.5 0 waiting for initial osdmap
2026-03-10T11:25:30.698 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:25:30 vm07 bash[24010]: debug 2026-03-10T11:25:30.370+0000 7efe09232700 -1 osd.5 34 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
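The create-or-move weight of 0.0195 is the device capacity expressed in TiB: these VPS disks are about 20 GiB each, and 20/1024 ≈ 0.0195, which also matches the cluster totals growing from 100 GiB with five OSDs up to 120 GiB once the sixth joins. If a weight ever needs correcting by hand, the standard CLI covers it:

    ceph osd tree                         # CRUSH weights per OSD, in TiB
    ceph osd crush reweight osd.5 0.0195  # set a weight explicitly (normally unnecessary)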
2026-03-10T11:25:31.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:31 vm07 bash[17804]: cluster 2026-03-10T11:25:29.395183+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:25:31.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:31 vm07 bash[17804]: cluster 2026-03-10T11:25:29.395249+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:25:31.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:31 vm07 bash[17804]: audit 2026-03-10T11:25:30.355599+0000 mon.a (mon.0) 455 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T11:25:31.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:31 vm07 bash[17804]: cluster 2026-03-10T11:25:30.355798+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in
2026-03-10T11:25:31.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:31 vm07 bash[17804]: audit 2026-03-10T11:25:30.356597+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:25:31.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:31 vm07 bash[17804]: audit 2026-03-10T11:25:30.366586+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:25:31.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:31 vm07 bash[17804]: audit 2026-03-10T11:25:31.240347+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:25:31.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:31 vm07 bash[17804]: audit 2026-03-10T11:25:31.256068+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
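Once the create-or-move finishes, osd.5 sits under host=vm07/root=default with device class hdd; the same facts the mgr keeps polling in the osd metadata audit records can be checked interactively:

    ceph osd tree        # osd.5 should show under host vm07 with class hdd
    ceph osd metadata 5  # the query behind the repeated 'osd metadata' audit lines
    ceph osd df          # capacity and utilization per OSD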
rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:25:31.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:31 vm07 bash[17804]: audit 2026-03-10T11:25:31.358840+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:25:31.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:31 vm05 bash[22470]: cluster 2026-03-10T11:25:29.395183+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:25:31.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:31 vm05 bash[22470]: cluster 2026-03-10T11:25:29.395249+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:25:31.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:31 vm05 bash[22470]: audit 2026-03-10T11:25:30.355599+0000 mon.a (mon.0) 455 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-10T11:25:31.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:31 vm05 bash[22470]: cluster 2026-03-10T11:25:30.355798+0000 mon.a (mon.0) 456 : cluster [DBG] osdmap e34: 6 total, 5 up, 6 in 2026-03-10T11:25:31.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:31 vm05 bash[22470]: audit 2026-03-10T11:25:30.356597+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:25:31.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:31 vm05 bash[22470]: audit 2026-03-10T11:25:30.366586+0000 mon.a (mon.0) 458 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:25:31.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:31 vm05 bash[22470]: audit 2026-03-10T11:25:31.240347+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:25:31.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:31 vm05 bash[22470]: audit 2026-03-10T11:25:31.256068+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:25:31.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:31 vm05 bash[22470]: audit 2026-03-10T11:25:31.358840+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:25:31.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:31 vm05 bash[17453]: cluster 2026-03-10T11:25:29.395183+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:25:31.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:31 vm05 bash[17453]: cluster 2026-03-10T11:25:29.395249+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:25:31.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:31 vm05 bash[17453]: audit 2026-03-10T11:25:30.355599+0000 mon.a (mon.0) 455 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-10T11:25:31.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:31 vm05 
2026-03-10T11:25:32.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:32 vm07 bash[17804]: cluster 2026-03-10T11:25:31.212872+0000 mgr.y (mgr.14152) 104 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 29 MiB used, 100 GiB / 100 GiB avail
2026-03-10T11:25:32.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:32 vm07 bash[17804]: cluster 2026-03-10T11:25:31.368957+0000 mon.a (mon.0) 462 : cluster [INF] osd.5 [v2:192.168.123.107:6808/1013528300,v1:192.168.123.107:6809/1013528300] boot
2026-03-10T11:25:32.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:32 vm07 bash[17804]: cluster 2026-03-10T11:25:31.370033+0000 mon.a (mon.0) 463 : cluster [DBG] osdmap e35: 6 total, 6 up, 6 in
2026-03-10T11:25:32.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:32 vm07 bash[17804]: audit 2026-03-10T11:25:31.371210+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:25:32.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:32 vm07 bash[17804]: cluster 2026-03-10T11:25:32.325235+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e36: 6 total, 6 up, 6 in
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: cephadm 2026-03-10T11:25:32.749658+0000 mgr.y (mgr.14152) 105 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: audit 2026-03-10T11:25:32.757018+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: audit 2026-03-10T11:25:32.758064+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: audit 2026-03-10T11:25:32.758699+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: cephadm 2026-03-10T11:25:32.759196+0000 mgr.y (mgr.14152) 106 : cephadm [INF] Adjusting osd_memory_target on vm07 to 227.8M
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: cephadm 2026-03-10T11:25:32.759812+0000 mgr.y (mgr.14152) 107 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: audit 2026-03-10T11:25:32.763912+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
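The cephadm WRN is the memory autotuner at work: the module divides the host's RAM across its daemons, and on these small VPS nodes the computed per-OSD share (227.8M, i.e. 238960844 bytes) falls below osd_memory_target's hard minimum of 939524096 bytes (896 MiB), so the set is rejected and the warning recurs on each tuning pass. Assuming a similarly undersized test host, the tuner can be inspected and, if desired, switched off:

    ceph config get osd osd_memory_target                 # effective per-OSD memory target
    ceph config set osd osd_memory_target_autotune false  # stop cephadm from recomputing it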
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: cluster 2026-03-10T11:25:33.335767+0000 mon.a (mon.0) 470 : cluster [DBG] osdmap e37: 6 total, 6 up, 6 in
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: audit 2026-03-10T11:25:33.676506+0000 mon.a (mon.0) 471 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "783416c9-d1a2-4d8f-91e5-b6343f3a3d0a"}]: dispatch
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: audit 2026-03-10T11:25:33.677541+0000 mon.b (mon.2) 19 : audit [INF] from='client.? 192.168.123.107:0/1481943683' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "783416c9-d1a2-4d8f-91e5-b6343f3a3d0a"}]: dispatch
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: audit 2026-03-10T11:25:33.690438+0000 mon.a (mon.0) 472 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "783416c9-d1a2-4d8f-91e5-b6343f3a3d0a"}]': finished
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: cluster 2026-03-10T11:25:33.690581+0000 mon.a (mon.0) 473 : cluster [DBG] osdmap e38: 7 total, 6 up, 7 in
2026-03-10T11:25:34.096 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:33 vm07 bash[17804]: audit 2026-03-10T11:25:33.691048+0000 mon.a (mon.0) 474 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:35.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:34 vm05 bash[22470]: cluster 2026-03-10T11:25:33.213162+0000 mgr.y (mgr.14152) 108 : cluster [DBG] pgmap v87: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail
2026-03-10T11:25:35.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:34 vm05 bash[22470]: audit 2026-03-10T11:25:34.397178+0000 mon.b (mon.2) 20 : audit [DBG] from='client.? 192.168.123.107:0/1750756793' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
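The lone PG's state in the pgmap records tracks the osdmap churn from the new OSDs: it goes peering here (v87), active+recovering once data starts moving (v92 below), and back to active+clean when recovery completes (v94). The same progression is visible with the standard status commands:

    ceph pg stat   # one-line PG summary, like these pgmap records
    ceph -s        # full cluster status, including recovery progress
    ceph osd stat  # the 'N total, N up, N in' counts from the osdmap lines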
2026-03-10T11:25:37.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:36 vm05 bash[22470]: cluster 2026-03-10T11:25:35.213498+0000 mgr.y (mgr.14152) 109 : cluster [DBG] pgmap v90: 1 pgs: 1 peering; 0 B data, 34 MiB used, 120 GiB / 120 GiB avail
2026-03-10T11:25:39.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:38 vm05 bash[22470]: cluster 2026-03-10T11:25:37.213738+0000 mgr.y (mgr.14152) 110 : cluster [DBG] pgmap v91: 1 pgs: 1 peering; 0 B data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T11:25:40.772 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:25:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:41.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:40 vm05 bash[22470]: cluster 2026-03-10T11:25:39.213961+0000 mgr.y (mgr.14152) 111 : cluster [DBG] pgmap v92: 1 pgs: 1 active+recovering; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T11:25:41.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:40 vm05 bash[22470]: audit 2026-03-10T11:25:39.948826+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T11:25:41.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:40 vm05 bash[22470]: audit 2026-03-10T11:25:39.949316+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:42.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:41 vm05 bash[22470]: cephadm 2026-03-10T11:25:39.949704+0000 mgr.y (mgr.14152) 112 : cephadm [INF] Deploying daemon osd.6 on vm07
2026-03-10T11:25:42.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:41 vm05 bash[22470]: audit 2026-03-10T11:25:40.791870+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:25:42.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:41 vm05 bash[22470]: audit 2026-03-10T11:25:40.793953+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:42.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:41 vm05 bash[22470]: audit 2026-03-10T11:25:40.795393+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
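'Deploying daemon osd.6 on vm07' is the cephadm mgr module acting on the orch daemon add from the earlier audit record; from a shell the deployment can be followed with the orchestrator CLI:

    ceph orch ps vm07    # daemon inventory on vm07; osd.6 appears once deployed
    ceph orch device ls  # device inventory, reflecting the freshly zapped drives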
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:41 vm05 bash[22470]: audit 2026-03-10T11:25:40.796035+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:42.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:41 vm05 bash[17453]: cephadm 2026-03-10T11:25:39.949704+0000 mgr.y (mgr.14152) 112 : cephadm [INF] Deploying daemon osd.6 on vm07 2026-03-10T11:25:42.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:41 vm05 bash[17453]: audit 2026-03-10T11:25:40.791870+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:25:42.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:41 vm05 bash[17453]: audit 2026-03-10T11:25:40.793953+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:42.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:41 vm05 bash[17453]: audit 2026-03-10T11:25:40.795393+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:42.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:41 vm05 bash[17453]: audit 2026-03-10T11:25:40.796035+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:42.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:41 vm07 bash[17804]: cephadm 2026-03-10T11:25:39.949704+0000 mgr.y (mgr.14152) 112 : cephadm [INF] Deploying daemon osd.6 on vm07 2026-03-10T11:25:42.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:41 vm07 bash[17804]: audit 2026-03-10T11:25:40.791870+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:25:42.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:41 vm07 bash[17804]: audit 2026-03-10T11:25:40.793953+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:42.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:41 vm07 bash[17804]: audit 2026-03-10T11:25:40.795393+0000 mon.a (mon.0) 479 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:42.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:41 vm07 bash[17804]: audit 2026-03-10T11:25:40.796035+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:43.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:42 vm05 bash[22470]: cluster 2026-03-10T11:25:41.214183+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v93: 1 pgs: 1 active+recovering; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T11:25:43.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:42 vm05 bash[17453]: cluster 2026-03-10T11:25:41.214183+0000 mgr.y (mgr.14152) 113 : cluster [DBG] pgmap v93: 1 pgs: 1 active+recovering; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T11:25:43.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:42 vm07 bash[17804]: cluster 2026-03-10T11:25:41.214183+0000 mgr.y 
(mgr.14152) 113 : cluster [DBG] pgmap v93: 1 pgs: 1 active+recovering; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T11:25:44.191 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 6 on host 'vm07' 2026-03-10T11:25:44.266 DEBUG:teuthology.orchestra.run.vm07:osd.6> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.6.service 2026-03-10T11:25:44.267 INFO:tasks.cephadm:Deploying osd.7 on vm07 with /dev/vdb... 2026-03-10T11:25:44.267 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- lvm zap /dev/vdb 2026-03-10T11:25:44.942 INFO:teuthology.orchestra.run.vm07.stdout: 2026-03-10T11:25:44.952 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch daemon add osd vm07:/dev/vdb 2026-03-10T11:25:45.087 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:44 vm07 bash[17804]: cluster 2026-03-10T11:25:43.214426+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T11:25:45.087 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:44 vm07 bash[17804]: audit 2026-03-10T11:25:43.757863+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:45.088 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:44 vm07 bash[17804]: audit 2026-03-10T11:25:43.906023+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:45.088 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:44 vm07 bash[17804]: audit 2026-03-10T11:25:43.986757+0000 mon.a (mon.0) 483 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:25:45.088 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:44 vm07 bash[17804]: audit 2026-03-10T11:25:43.987939+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/319224116,v1:192.168.123.107:6817/319224116]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:25:45.088 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:44 vm07 bash[17804]: audit 2026-03-10T11:25:44.185870+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:45.088 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:44 vm07 bash[17804]: audit 2026-03-10T11:25:44.223288+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:25:45.088 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:44 vm07 bash[17804]: audit 2026-03-10T11:25:44.224172+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:45.088 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:44 vm07 bash[17804]: audit 2026-03-10T11:25:44.224611+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:45.097 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:44 vm05 bash[22470]: cluster 2026-03-10T11:25:43.214426+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T11:25:45.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:44 vm05 bash[22470]: audit 2026-03-10T11:25:43.757863+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:45.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:44 vm05 bash[22470]: audit 2026-03-10T11:25:43.906023+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:45.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:44 vm05 bash[22470]: audit 2026-03-10T11:25:43.986757+0000 mon.a (mon.0) 483 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:25:45.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:44 vm05 bash[22470]: audit 2026-03-10T11:25:43.987939+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/319224116,v1:192.168.123.107:6817/319224116]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:25:45.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:44 vm05 bash[22470]: audit 2026-03-10T11:25:44.185870+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:45.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:44 vm05 bash[22470]: audit 2026-03-10T11:25:44.223288+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:25:45.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:44 vm05 bash[22470]: audit 2026-03-10T11:25:44.224172+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:45.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:44 vm05 bash[22470]: audit 2026-03-10T11:25:44.224611+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:44 vm05 bash[17453]: cluster 2026-03-10T11:25:43.214426+0000 mgr.y (mgr.14152) 114 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T11:25:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:44 vm05 bash[17453]: audit 2026-03-10T11:25:43.757863+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:44 vm05 bash[17453]: audit 2026-03-10T11:25:43.906023+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:44 vm05 bash[17453]: audit 2026-03-10T11:25:43.986757+0000 mon.a (mon.0) 483 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:25:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:44 vm05 bash[17453]: audit 
2026-03-10T11:25:43.987939+0000 mon.b (mon.2) 21 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/319224116,v1:192.168.123.107:6817/319224116]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:25:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:44 vm05 bash[17453]: audit 2026-03-10T11:25:44.185870+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:25:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:44 vm05 bash[17453]: audit 2026-03-10T11:25:44.223288+0000 mon.a (mon.0) 485 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:25:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:44 vm05 bash[17453]: audit 2026-03-10T11:25:44.224172+0000 mon.a (mon.0) 486 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:44 vm05 bash[17453]: audit 2026-03-10T11:25:44.224611+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: audit 2026-03-10T11:25:44.913139+0000 mon.a (mon.0) 488 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: cluster 2026-03-10T11:25:44.913247+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: audit 2026-03-10T11:25:44.913425+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: audit 2026-03-10T11:25:44.915011+0000 mon.a (mon.0) 491 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: audit 2026-03-10T11:25:44.916272+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/319224116,v1:192.168.123.107:6817/319224116]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: cluster 2026-03-10T11:25:45.214720+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: audit 2026-03-10T11:25:45.381460+0000 mgr.y (mgr.14152) 116 : audit [DBG] from='client.24259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: audit 2026-03-10T11:25:45.382854+0000 mon.a (mon.0) 492 : audit 
[DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: audit 2026-03-10T11:25:45.384211+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-10T11:25:46.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:45 vm07 bash[17804]: audit 2026-03-10T11:25:45.384631+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:25:46.198 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:25:45 vm07 bash[27159]: debug 2026-03-10T11:25:45.918+0000 7fda57561700 -1 osd.6 0 waiting for initial osdmap 2026-03-10T11:25:46.198 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:25:45 vm07 bash[27159]: debug 2026-03-10T11:25:45.926+0000 7fda52efa700 -1 osd.6 40 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T11:25:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: audit 2026-03-10T11:25:44.913139+0000 mon.a (mon.0) 488 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T11:25:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: cluster 2026-03-10T11:25:44.913247+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in 2026-03-10T11:25:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: audit 2026-03-10T11:25:44.913425+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:25:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: audit 2026-03-10T11:25:44.915011+0000 mon.a (mon.0) 491 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: audit 2026-03-10T11:25:44.916272+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/319224116,v1:192.168.123.107:6817/319224116]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:25:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: cluster 2026-03-10T11:25:45.214720+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail 2026-03-10T11:25:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: audit 2026-03-10T11:25:45.381460+0000 mgr.y (mgr.14152) 116 : audit [DBG] from='client.24259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:25:46.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: audit 2026-03-10T11:25:45.382854+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 
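The entries above are the whole raw-device OSD flow for this host: tasks.cephadm zaps /dev/vdb with ceph-volume, asks the orchestrator for a new daemon, and the mon audit trail then shows the new OSD setting its device class and CRUSH position before boot. Stripped of the -c/-k/--fsid plumbing the harness passes, the same flow by hand is roughly as follows (a sketch using this run's host, device, and image; not a prescribed procedure):

    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 ceph-volume -- lvm zap /dev/vdb
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -- ceph orch daemon add osd vm07:/dev/vdb
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -- ceph orch ps --daemon-type osd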
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: audit 2026-03-10T11:25:45.384211+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:45 vm05 bash[22470]: audit 2026-03-10T11:25:45.384631+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: audit 2026-03-10T11:25:44.913139+0000 mon.a (mon.0) 488 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: cluster 2026-03-10T11:25:44.913247+0000 mon.a (mon.0) 489 : cluster [DBG] osdmap e39: 7 total, 6 up, 7 in
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: audit 2026-03-10T11:25:44.913425+0000 mon.a (mon.0) 490 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: audit 2026-03-10T11:25:44.915011+0000 mon.a (mon.0) 491 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: audit 2026-03-10T11:25:44.916272+0000 mon.b (mon.2) 22 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/319224116,v1:192.168.123.107:6817/319224116]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: cluster 2026-03-10T11:25:45.214720+0000 mgr.y (mgr.14152) 115 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 35 MiB used, 120 GiB / 120 GiB avail
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: audit 2026-03-10T11:25:45.381460+0000 mgr.y (mgr.14152) 116 : audit [DBG] from='client.24259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm07:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: audit 2026-03-10T11:25:45.382854+0000 mon.a (mon.0) 492 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: audit 2026-03-10T11:25:45.384211+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-10T11:25:46.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:45 vm05 bash[17453]: audit 2026-03-10T11:25:45.384631+0000 mon.a (mon.0) 494 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
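Every cluster/audit/cephadm entry in this stretch appears three times because each monitor's journal (mon.a, mon.b, mon.c) echoes the same central log channels back through journalctl. When reading a run like this it is usually easier to follow one channel directly than to pick it out of three journals; for the cephadm channel that would be something like the following (a sketch, using the same admin keyring the harness mounts into the shell):

    sudo /home/ubuntu/cephtest/cephadm shell -- ceph -W cephadm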
2026-03-10T11:25:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:46 vm07 bash[17804]: audit 2026-03-10T11:25:45.919360+0000 mon.a (mon.0) 495 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T11:25:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:46 vm07 bash[17804]: cluster 2026-03-10T11:25:45.919484+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-10T11:25:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:46 vm07 bash[17804]: audit 2026-03-10T11:25:45.922256+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:46 vm07 bash[17804]: audit 2026-03-10T11:25:45.925819+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:46 vm07 bash[17804]: cluster 2026-03-10T11:25:46.921804+0000 mon.a (mon.0) 499 : cluster [INF] osd.6 [v2:192.168.123.107:6816/319224116,v1:192.168.123.107:6817/319224116] boot
2026-03-10T11:25:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:46 vm07 bash[17804]: cluster 2026-03-10T11:25:46.921845+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e41: 7 total, 7 up, 7 in
2026-03-10T11:25:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:46 vm07 bash[17804]: audit 2026-03-10T11:25:46.922071+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:47.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:46 vm05 bash[22470]: audit 2026-03-10T11:25:45.919360+0000 mon.a (mon.0) 495 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:46 vm05 bash[22470]: cluster 2026-03-10T11:25:45.919484+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:46 vm05 bash[22470]: audit 2026-03-10T11:25:45.922256+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:46 vm05 bash[22470]: audit 2026-03-10T11:25:45.925819+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:46 vm05 bash[22470]: cluster 2026-03-10T11:25:46.921804+0000 mon.a (mon.0) 499 : cluster [INF] osd.6 [v2:192.168.123.107:6816/319224116,v1:192.168.123.107:6817/319224116] boot
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:46 vm05 bash[22470]: cluster 2026-03-10T11:25:46.921845+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e41: 7 total, 7 up, 7 in
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:46 vm05 bash[22470]: audit 2026-03-10T11:25:46.922071+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:46 vm05 bash[17453]: audit 2026-03-10T11:25:45.919360+0000 mon.a (mon.0) 495 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:46 vm05 bash[17453]: cluster 2026-03-10T11:25:45.919484+0000 mon.a (mon.0) 496 : cluster [DBG] osdmap e40: 7 total, 6 up, 7 in
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:46 vm05 bash[17453]: audit 2026-03-10T11:25:45.922256+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:46 vm05 bash[17453]: audit 2026-03-10T11:25:45.925819+0000 mon.a (mon.0) 498 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:46 vm05 bash[17453]: cluster 2026-03-10T11:25:46.921804+0000 mon.a (mon.0) 499 : cluster [INF] osd.6 [v2:192.168.123.107:6816/319224116,v1:192.168.123.107:6817/319224116] boot
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:46 vm05 bash[17453]: cluster 2026-03-10T11:25:46.921845+0000 mon.a (mon.0) 500 : cluster [DBG] osdmap e41: 7 total, 7 up, 7 in
2026-03-10T11:25:47.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:46 vm05 bash[17453]: audit 2026-03-10T11:25:46.922071+0000 mon.a (mon.0) 501 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:25:48.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:47 vm07 bash[17804]: cluster 2026-03-10T11:25:44.979413+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:25:48.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:47 vm07 bash[17804]: cluster 2026-03-10T11:25:44.979504+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:25:48.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:47 vm07 bash[17804]: cluster 2026-03-10T11:25:47.214974+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail; 53 KiB/s, 0 objects/s recovering
2026-03-10T11:25:48.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:47 vm07 bash[17804]: cluster 2026-03-10T11:25:47.330672+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in
2026-03-10T11:25:48.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:47 vm05 bash[22470]: cluster 2026-03-10T11:25:44.979413+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:25:48.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:47 vm05 bash[22470]: cluster 2026-03-10T11:25:44.979504+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:25:48.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:47 vm05 bash[22470]: cluster 2026-03-10T11:25:47.214974+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail; 53 KiB/s, 0 objects/s recovering
2026-03-10T11:25:48.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:47 vm05 bash[22470]: cluster 2026-03-10T11:25:47.330672+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in
2026-03-10T11:25:48.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:47 vm05 bash[17453]: cluster 2026-03-10T11:25:44.979413+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-10T11:25:48.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:47 vm05 bash[17453]: cluster 2026-03-10T11:25:44.979504+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-10T11:25:48.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:47 vm05 bash[17453]: cluster 2026-03-10T11:25:47.214974+0000 mgr.y (mgr.14152) 117 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail; 53 KiB/s, 0 objects/s recovering
2026-03-10T11:25:48.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:47 vm05 bash[17453]: cluster 2026-03-10T11:25:47.330672+0000 mon.a (mon.0) 502 : cluster [DBG] osdmap e42: 7 total, 7 up, 7 in
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:49 vm05 bash[22470]: cluster 2026-03-10T11:25:48.341463+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:49 vm05 bash[22470]: cephadm 2026-03-10T11:25:48.599512+0000 mgr.y (mgr.14152) 118 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:49 vm05 bash[22470]: audit 2026-03-10T11:25:48.607765+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:49 vm05 bash[22470]: audit 2026-03-10T11:25:48.608799+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:49 vm05 bash[22470]: audit 2026-03-10T11:25:48.609339+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:49 vm05 bash[22470]: audit 2026-03-10T11:25:48.609844+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:49 vm05 bash[22470]: cephadm 2026-03-10T11:25:48.610263+0000 mgr.y (mgr.14152) 119 : cephadm [INF] Adjusting osd_memory_target on vm07 to 151.9M
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:49 vm05 bash[22470]: cephadm 2026-03-10T11:25:48.610780+0000 mgr.y (mgr.14152) 120 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 159307229: error parsing value: Value '159307229' is below minimum 939524096
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:49 vm05 bash[22470]: audit 2026-03-10T11:25:48.615677+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:49 vm05 bash[17453]: cluster 2026-03-10T11:25:48.341463+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
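The cephadm [WRN] above is the memory autotuner at work: after the new OSD lands on vm07 the mgr recomputes a per-OSD osd_memory_target, and on these small VPS nodes the result (159307229 bytes, the 151.9M it just logged) falls below the option's hard minimum of 939524096, so it clears the per-daemon overrides instead of applying the value. That is harmless in this suite. On a host where it mattered, the effective value could be inspected and autotuning opted out per host, roughly like so (a sketch; the _no_autotune_memory host label is cephadm's documented opt-out and is not something this job sets):

    sudo /home/ubuntu/cephtest/cephadm shell -- ceph config get osd osd_memory_target
    sudo /home/ubuntu/cephtest/cephadm shell -- ceph orch host label add vm07 _no_autotune_memory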
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:49 vm05 bash[17453]: cephadm 2026-03-10T11:25:48.599512+0000 mgr.y (mgr.14152) 118 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:49 vm05 bash[17453]: audit 2026-03-10T11:25:48.607765+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:49 vm05 bash[17453]: audit 2026-03-10T11:25:48.608799+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:49 vm05 bash[17453]: audit 2026-03-10T11:25:48.609339+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:49 vm05 bash[17453]: audit 2026-03-10T11:25:48.609844+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:49 vm05 bash[17453]: cephadm 2026-03-10T11:25:48.610263+0000 mgr.y (mgr.14152) 119 : cephadm [INF] Adjusting osd_memory_target on vm07 to 151.9M
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:49 vm05 bash[17453]: cephadm 2026-03-10T11:25:48.610780+0000 mgr.y (mgr.14152) 120 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 159307229: error parsing value: Value '159307229' is below minimum 939524096
2026-03-10T11:25:49.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:49 vm05 bash[17453]: audit 2026-03-10T11:25:48.615677+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:49.654 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:49 vm07 bash[17804]: cluster 2026-03-10T11:25:48.341463+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e43: 7 total, 7 up, 7 in
2026-03-10T11:25:49.654 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:49 vm07 bash[17804]: cephadm 2026-03-10T11:25:48.599512+0000 mgr.y (mgr.14152) 118 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T11:25:49.654 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:49 vm07 bash[17804]: audit 2026-03-10T11:25:48.607765+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:49.654 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:49 vm07 bash[17804]: audit 2026-03-10T11:25:48.608799+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:49.654 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:49 vm07 bash[17804]: audit 2026-03-10T11:25:48.609339+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:49.654 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:49 vm07 bash[17804]: audit 2026-03-10T11:25:48.609844+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:25:49.654 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:49 vm07 bash[17804]: cephadm 2026-03-10T11:25:48.610263+0000 mgr.y (mgr.14152) 119 : cephadm [INF] Adjusting osd_memory_target on vm07 to 151.9M
2026-03-10T11:25:49.654 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:49 vm07 bash[17804]: cephadm 2026-03-10T11:25:48.610780+0000 mgr.y (mgr.14152) 120 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 159307229: error parsing value: Value '159307229' is below minimum 939524096
2026-03-10T11:25:49.654 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:49 vm07 bash[17804]: audit 2026-03-10T11:25:48.615677+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:50.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:50 vm05 bash[17453]: cluster 2026-03-10T11:25:49.215273+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:50 vm05 bash[17453]: audit 2026-03-10T11:25:49.541799+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.107:0/2333036227' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d3a17b00-d9f4-4951-b587-40f724c9827b"}]: dispatch
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:50 vm05 bash[17453]: audit 2026-03-10T11:25:49.542594+0000 mon.a (mon.0) 509 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d3a17b00-d9f4-4951-b587-40f724c9827b"}]: dispatch
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:50 vm05 bash[17453]: audit 2026-03-10T11:25:49.553585+0000 mon.a (mon.0) 510 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d3a17b00-d9f4-4951-b587-40f724c9827b"}]': finished
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:50 vm05 bash[17453]: cluster 2026-03-10T11:25:49.553754+0000 mon.a (mon.0) 511 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:50 vm05 bash[17453]: audit 2026-03-10T11:25:49.553942+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:50 vm05 bash[17453]: audit 2026-03-10T11:25:50.258703+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.107:0/358998414' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:50 vm05 bash[22470]: cluster 2026-03-10T11:25:49.215273+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:50 vm05 bash[22470]: audit 2026-03-10T11:25:49.541799+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.107:0/2333036227' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d3a17b00-d9f4-4951-b587-40f724c9827b"}]: dispatch
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:50 vm05 bash[22470]: audit 2026-03-10T11:25:49.542594+0000 mon.a (mon.0) 509 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d3a17b00-d9f4-4951-b587-40f724c9827b"}]: dispatch
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:50 vm05 bash[22470]: audit 2026-03-10T11:25:49.553585+0000 mon.a (mon.0) 510 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d3a17b00-d9f4-4951-b587-40f724c9827b"}]': finished
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:50 vm05 bash[22470]: cluster 2026-03-10T11:25:49.553754+0000 mon.a (mon.0) 511 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:50 vm05 bash[22470]: audit 2026-03-10T11:25:49.553942+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:25:50.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:50 vm05 bash[22470]: audit 2026-03-10T11:25:50.258703+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.107:0/358998414' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T11:25:50.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:50 vm07 bash[17804]: cluster 2026-03-10T11:25:49.215273+0000 mgr.y (mgr.14152) 121 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 40 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:50.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:50 vm07 bash[17804]: audit 2026-03-10T11:25:49.541799+0000 mon.c (mon.1) 14 : audit [INF] from='client.? 192.168.123.107:0/2333036227' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d3a17b00-d9f4-4951-b587-40f724c9827b"}]: dispatch
2026-03-10T11:25:50.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:50 vm07 bash[17804]: audit 2026-03-10T11:25:49.542594+0000 mon.a (mon.0) 509 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d3a17b00-d9f4-4951-b587-40f724c9827b"}]: dispatch
2026-03-10T11:25:50.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:50 vm07 bash[17804]: audit 2026-03-10T11:25:49.553585+0000 mon.a (mon.0) 510 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d3a17b00-d9f4-4951-b587-40f724c9827b"}]': finished
2026-03-10T11:25:50.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:50 vm07 bash[17804]: cluster 2026-03-10T11:25:49.553754+0000 mon.a (mon.0) 511 : cluster [DBG] osdmap e44: 8 total, 7 up, 8 in
2026-03-10T11:25:50.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:50 vm07 bash[17804]: audit 2026-03-10T11:25:49.553942+0000 mon.a (mon.0) 512 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:25:50.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:50 vm07 bash[17804]: audit 2026-03-10T11:25:50.258703+0000 mon.b (mon.2) 23 : audit [DBG] from='client.? 192.168.123.107:0/358998414' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-10T11:25:52.642 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:52 vm07 bash[17804]: cluster 2026-03-10T11:25:51.215615+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:52.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:52 vm05 bash[22470]: cluster 2026-03-10T11:25:51.215615+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:52.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:52 vm05 bash[17453]: cluster 2026-03-10T11:25:51.215615+0000 mgr.y (mgr.14152) 122 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:54.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:54 vm07 bash[17804]: cluster 2026-03-10T11:25:53.215931+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v105: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:54.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:54 vm05 bash[17453]: cluster 2026-03-10T11:25:53.215931+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v105: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:54.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:54 vm05 bash[22470]: cluster 2026-03-10T11:25:53.215931+0000 mgr.y (mgr.14152) 123 : cluster [DBG] pgmap v105: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:56.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:56 vm07 bash[17804]: cluster 2026-03-10T11:25:55.216162+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v106: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:56.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:56 vm07 bash[17804]: audit 2026-03-10T11:25:55.918816+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T11:25:56.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:56 vm07 bash[17804]: audit 2026-03-10T11:25:55.919362+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:56.806 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.806 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.806 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.806 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.806 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.806 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.806 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.806 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.807 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.807 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:25:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:25:56.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:56 vm05 bash[22470]: cluster 2026-03-10T11:25:55.216162+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v106: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:56.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:56 vm05 bash[22470]: audit 2026-03-10T11:25:55.918816+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T11:25:56.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:56 vm05 bash[22470]: audit 2026-03-10T11:25:55.919362+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:56.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:56 vm05 bash[17453]: cluster 2026-03-10T11:25:55.216162+0000 mgr.y (mgr.14152) 124 : cluster [DBG] pgmap v106: 1 pgs: 1 active+recovering; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:56.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:56 vm05 bash[17453]: audit 2026-03-10T11:25:55.918816+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T11:25:56.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:56 vm05 bash[17453]: audit 2026-03-10T11:25:55.919362+0000 mon.a (mon.0) 514 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:58.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:57 vm05 bash[22470]: cephadm 2026-03-10T11:25:55.919786+0000 mgr.y (mgr.14152) 125 : cephadm [INF] Deploying daemon osd.7 on vm07
2026-03-10T11:25:58.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:57 vm05 bash[22470]: audit 2026-03-10T11:25:56.825587+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:58.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:57 vm05 bash[22470]: audit 2026-03-10T11:25:56.843983+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:25:58.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:57 vm05 bash[22470]: audit 2026-03-10T11:25:56.844880+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:58.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:57 vm05 bash[22470]: audit 2026-03-10T11:25:56.845635+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:25:58.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:57 vm05 bash[17453]: cephadm 2026-03-10T11:25:55.919786+0000 mgr.y (mgr.14152) 125 : cephadm [INF] Deploying daemon osd.7 on vm07
2026-03-10T11:25:58.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:57 vm05 bash[17453]: audit 2026-03-10T11:25:56.825587+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:58.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:57 vm05 bash[17453]: audit 2026-03-10T11:25:56.843983+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:25:58.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:57 vm05 bash[17453]: audit 2026-03-10T11:25:56.844880+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:58.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:57 vm05 bash[17453]: audit 2026-03-10T11:25:56.845635+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:25:58.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:57 vm07 bash[17804]: cephadm 2026-03-10T11:25:55.919786+0000 mgr.y (mgr.14152) 125 : cephadm [INF] Deploying daemon osd.7 on vm07
2026-03-10T11:25:58.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:57 vm07 bash[17804]: audit 2026-03-10T11:25:56.825587+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:25:58.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:57 vm07 bash[17804]: audit 2026-03-10T11:25:56.843983+0000 mon.a (mon.0) 516 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:25:58.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:57 vm07 bash[17804]: audit 2026-03-10T11:25:56.844880+0000 mon.a (mon.0) 517 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:25:58.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:57 vm07 bash[17804]: audit 2026-03-10T11:25:56.845635+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:25:59.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:25:58 vm05 bash[22470]: cluster 2026-03-10T11:25:57.216432+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:59.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:25:58 vm05 bash[17453]: cluster 2026-03-10T11:25:57.216432+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:25:59.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:25:58 vm07 bash[17804]: cluster 2026-03-10T11:25:57.216432+0000 mgr.y (mgr.14152) 126 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 42 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:26:00.364 INFO:teuthology.orchestra.run.vm07.stdout:Created osd(s) 7 on host 'vm07'
2026-03-10T11:26:00.436 DEBUG:teuthology.orchestra.run.vm07:osd.7> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.7.service
2026-03-10T11:26:00.437 INFO:tasks.cephadm:Waiting for 8 OSDs to come up...
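The "Waiting for 8 OSDs to come up..." step is a poll: as the next entries show, the harness shells into the cluster and reads ceph osd stat -f json until num_up_osds reaches the 8 OSDs in the job's roles (the first sample below still reports 7 up). An equivalent hand-rolled wait, assuming jq is available on the node (it is not part of this job's package set), would be roughly:

    until [ "$(sudo /home/ubuntu/cephtest/cephadm shell -- ceph osd stat -f json | jq -r '.num_up_osds')" -eq 8 ]; do
        sleep 5
    done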
2026-03-10T11:26:00.437 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd stat -f json
2026-03-10T11:26:00.878 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:26:00.932 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":45,"num_osds":8,"num_up_osds":7,"osd_up_since":1773141946,"num_in_osds":8,"osd_in_since":1773141949,"num_remapped_pgs":0}
2026-03-10T11:26:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:00 vm05 bash[22470]: cluster 2026-03-10T11:25:59.216718+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:26:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:00 vm05 bash[22470]: audit 2026-03-10T11:25:59.862280+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:26:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:00 vm05 bash[22470]: audit 2026-03-10T11:25:59.866563+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:26:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:00 vm05 bash[22470]: audit 2026-03-10T11:26:00.047217+0000 mon.a (mon.0) 521 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T11:26:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:00 vm05 bash[22470]: audit 2026-03-10T11:26:00.355823+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:00 vm05 bash[22470]: audit 2026-03-10T11:26:00.382434+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:00 vm05 bash[22470]: audit 2026-03-10T11:26:00.383290+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:00 vm05 bash[22470]: audit 2026-03-10T11:26:00.383782+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:00 vm05 bash[17453]: cluster 2026-03-10T11:25:59.216718+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:00 vm05 bash[17453]: audit 2026-03-10T11:25:59.862280+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:00 vm05 bash[17453]: audit 2026-03-10T11:25:59.866563+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:00 vm05 bash[17453]: audit 2026-03-10T11:26:00.047217+0000 mon.a (mon.0) 521 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:00 vm05 bash[17453]: audit 2026-03-10T11:26:00.355823+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:00 vm05 bash[17453]: audit 2026-03-10T11:26:00.382434+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:00 vm05 bash[17453]: audit 2026-03-10T11:26:00.383290+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:26:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:00 vm05 bash[17453]: audit 2026-03-10T11:26:00.383782+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:26:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:00 vm07 bash[17804]: cluster 2026-03-10T11:25:59.216718+0000 mgr.y (mgr.14152) 127 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:26:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:00 vm07 bash[17804]: audit 2026-03-10T11:25:59.862280+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:26:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:00 vm07 bash[17804]: audit 2026-03-10T11:25:59.866563+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:26:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:00 vm07 bash[17804]: audit 2026-03-10T11:26:00.047217+0000 mon.a (mon.0) 521 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-10T11:26:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:00 vm07 bash[17804]: audit 2026-03-10T11:26:00.355823+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y'
2026-03-10T11:26:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:00 vm07 bash[17804]: audit 2026-03-10T11:26:00.382434+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:26:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:00 vm07 bash[17804]: audit 2026-03-10T11:26:00.383290+0000 mon.a (mon.0) 524 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:26:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:00 vm07 bash[17804]: audit 2026-03-10T11:26:00.383782+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:26:01.933 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd stat -f json
2026-03-10T11:26:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:01 vm07 bash[17804]: audit 2026-03-10T11:26:00.875279+0000 mon.a (mon.0) 526 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T11:26:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:01 vm07 bash[17804]: cluster 2026-03-10T11:26:00.875372+0000 mon.a (mon.0) 527 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in
2026-03-10T11:26:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:01 vm07 bash[17804]: audit 2026-03-10T11:26:00.875442+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:26:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:01 vm07 bash[17804]: audit 2026-03-10T11:26:00.875551+0000 mon.a (mon.0) 529 : audit [DBG] from='client.? 192.168.123.105:0/568568337' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch
2026-03-10T11:26:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:01 vm07 bash[17804]: audit 2026-03-10T11:26:00.877089+0000 mon.a (mon.0) 530 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T11:26:02.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:01 vm07 bash[17804]: cluster 2026-03-10T11:26:01.217050+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail
2026-03-10T11:26:02.199 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:26:01 vm07 bash[30341]: debug 2026-03-10T11:26:01.878+0000 7f15f1982700 -1 osd.7 0 waiting for initial osdmap
2026-03-10T11:26:02.199 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:26:01 vm07 bash[30341]: debug 2026-03-10T11:26:01.894+0000 7f15ebb18700 -1 osd.7 46 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:01 vm05 bash[17453]: audit 2026-03-10T11:26:00.875279+0000 mon.a (mon.0) 526 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:01 vm05 bash[17453]: cluster 2026-03-10T11:26:00.875372+0000 mon.a (mon.0) 527 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in
2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:01 vm05 bash[17453]: audit 2026-03-10T11:26:00.875442+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:01 vm05 bash[17453]: audit 2026-03-10T11:26:00.875551+0000 mon.a (mon.0) 529 : audit [DBG] from='client.?
192.168.123.105:0/568568337' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:01 vm05 bash[17453]: audit 2026-03-10T11:26:00.877089+0000 mon.a (mon.0) 530 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:01 vm05 bash[17453]: cluster 2026-03-10T11:26:01.217050+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:01 vm05 bash[22470]: audit 2026-03-10T11:26:00.875279+0000 mon.a (mon.0) 526 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:01 vm05 bash[22470]: cluster 2026-03-10T11:26:00.875372+0000 mon.a (mon.0) 527 : cluster [DBG] osdmap e45: 8 total, 7 up, 8 in 2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:01 vm05 bash[22470]: audit 2026-03-10T11:26:00.875442+0000 mon.a (mon.0) 528 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:01 vm05 bash[22470]: audit 2026-03-10T11:26:00.875551+0000 mon.a (mon.0) 529 : audit [DBG] from='client.? 192.168.123.105:0/568568337' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:01 vm05 bash[22470]: audit 2026-03-10T11:26:00.877089+0000 mon.a (mon.0) 530 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:26:02.302 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:01 vm05 bash[22470]: cluster 2026-03-10T11:26:01.217050+0000 mgr.y (mgr.14152) 128 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 41 MiB used, 140 GiB / 140 GiB avail 2026-03-10T11:26:02.683 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:26:02.766 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":47,"num_osds":8,"num_up_osds":8,"osd_up_since":1773141962,"num_in_osds":8,"osd_in_since":1773141949,"num_remapped_pgs":1} 2026-03-10T11:26:02.766 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd dump --format=json 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:02 vm05 bash[17453]: audit 2026-03-10T11:26:01.875529+0000 mon.a (mon.0) 531 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:02 vm05 bash[17453]: cluster 2026-03-10T11:26:01.876458+0000 mon.a (mon.0) 532 : 
cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:02 vm05 bash[17453]: audit 2026-03-10T11:26:01.879799+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:02 vm05 bash[17453]: audit 2026-03-10T11:26:01.882132+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:02 vm05 bash[17453]: cluster 2026-03-10T11:26:02.330794+0000 mon.a (mon.0) 535 : cluster [INF] osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210] boot 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:02 vm05 bash[17453]: cluster 2026-03-10T11:26:02.330926+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:02 vm05 bash[17453]: audit 2026-03-10T11:26:02.337174+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:02 vm05 bash[17453]: audit 2026-03-10T11:26:02.680155+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 192.168.123.105:0/528593081' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:02 vm05 bash[22470]: audit 2026-03-10T11:26:01.875529+0000 mon.a (mon.0) 531 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:02 vm05 bash[22470]: cluster 2026-03-10T11:26:01.876458+0000 mon.a (mon.0) 532 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:02 vm05 bash[22470]: audit 2026-03-10T11:26:01.879799+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:02 vm05 bash[22470]: audit 2026-03-10T11:26:01.882132+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:02 vm05 bash[22470]: cluster 2026-03-10T11:26:02.330794+0000 mon.a (mon.0) 535 : cluster [INF] osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210] boot 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:02 vm05 bash[22470]: cluster 2026-03-10T11:26:02.330926+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-10T11:26:03.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:02 vm05 bash[22470]: audit 2026-03-10T11:26:02.337174+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:03.098 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:02 vm05 bash[22470]: audit 2026-03-10T11:26:02.680155+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 192.168.123.105:0/528593081' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T11:26:03.184 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:02 vm07 bash[17804]: audit 2026-03-10T11:26:01.875529+0000 mon.a (mon.0) 531 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210]' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]': finished 2026-03-10T11:26:03.184 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:02 vm07 bash[17804]: cluster 2026-03-10T11:26:01.876458+0000 mon.a (mon.0) 532 : cluster [DBG] osdmap e46: 8 total, 7 up, 8 in 2026-03-10T11:26:03.184 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:02 vm07 bash[17804]: audit 2026-03-10T11:26:01.879799+0000 mon.a (mon.0) 533 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:03.184 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:02 vm07 bash[17804]: audit 2026-03-10T11:26:01.882132+0000 mon.a (mon.0) 534 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:03.184 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:02 vm07 bash[17804]: cluster 2026-03-10T11:26:02.330794+0000 mon.a (mon.0) 535 : cluster [INF] osd.7 [v2:192.168.123.107:6824/3044827210,v1:192.168.123.107:6825/3044827210] boot 2026-03-10T11:26:03.184 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:02 vm07 bash[17804]: cluster 2026-03-10T11:26:02.330926+0000 mon.a (mon.0) 536 : cluster [DBG] osdmap e47: 8 total, 8 up, 8 in 2026-03-10T11:26:03.184 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:02 vm07 bash[17804]: audit 2026-03-10T11:26:02.337174+0000 mon.a (mon.0) 537 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:03.184 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:02 vm07 bash[17804]: audit 2026-03-10T11:26:02.680155+0000 mon.a (mon.0) 538 : audit [DBG] from='client.? 
192.168.123.105:0/528593081' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-10T11:26:04.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:03 vm07 bash[17804]: cluster 2026-03-10T11:26:01.088326+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:26:04.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:03 vm07 bash[17804]: cluster 2026-03-10T11:26:01.088437+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:26:04.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:03 vm07 bash[17804]: cluster 2026-03-10T11:26:03.217311+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:04.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:03 vm07 bash[17804]: cluster 2026-03-10T11:26:03.389009+0000 mon.a (mon.0) 539 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-10T11:26:04.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:03 vm05 bash[22470]: cluster 2026-03-10T11:26:01.088326+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:26:04.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:03 vm05 bash[22470]: cluster 2026-03-10T11:26:01.088437+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:26:04.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:03 vm05 bash[22470]: cluster 2026-03-10T11:26:03.217311+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:04.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:03 vm05 bash[22470]: cluster 2026-03-10T11:26:03.389009+0000 mon.a (mon.0) 539 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-10T11:26:04.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:03 vm05 bash[17453]: cluster 2026-03-10T11:26:01.088326+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-10T11:26:04.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:03 vm05 bash[17453]: cluster 2026-03-10T11:26:01.088437+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-10T11:26:04.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:03 vm05 bash[17453]: cluster 2026-03-10T11:26:03.217311+0000 mgr.y (mgr.14152) 129 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:04.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:03 vm05 bash[17453]: cluster 2026-03-10T11:26:03.389009+0000 mon.a (mon.0) 539 : cluster [DBG] osdmap e48: 8 total, 8 up, 8 in 2026-03-10T11:26:05.383 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:05.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:05 vm07 bash[17804]: cluster 2026-03-10T11:26:04.393561+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-10T11:26:05.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:05 vm07 bash[17804]: audit 2026-03-10T11:26:04.902185+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:05.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:05 vm07 bash[17804]: audit 2026-03-10T11:26:04.903223+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 
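The "config rm ... osd_memory_target" calls here, and the "Adjusting osd_memory_target on vm07 to 113.9M" / "below minimum 939524096" warning that follows a few seconds later, are cephadm's memory autotuning at work: roughly, it divides a share of each host's RAM among that host's OSDs, and on these small VPS nodes the per-OSD share lands under the option's floor, so the set is rejected and cephadm falls back to clearing the per-OSD overrides instead. A quick check of the arithmetic, using the exact values quoted in the log (the mechanism summary above is a hedged reading of the log, not a statement of cephadm internals):

    proposed_bytes = 119_480_422   # per-OSD target cephadm computed for vm07
    floor_bytes = 939_524_096      # minimum quoted in the mgr's WRN line

    print(f"{proposed_bytes / 2**20:.1f}M")   # -> 113.9M, matching the INF line
    print(floor_bytes == 896 * 2**20)         # -> True: the floor is 896 MiB
    print(proposed_bytes < floor_bytes)       # -> True, hence the WRN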
2026-03-10T11:26:05.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:05 vm07 bash[17804]: audit 2026-03-10T11:26:04.903973+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:05 vm07 bash[17804]: audit 2026-03-10T11:26:04.904568+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:05 vm07 bash[17804]: audit 2026-03-10T11:26:04.904991+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:05 vm07 bash[17804]: audit 2026-03-10T11:26:04.910854+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:05.747 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:26:05.747 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":49,"fsid":"72041074-1c73-11f1-8607-4fca9a5e0a4d","created":"2026-03-10T11:23:06.356940+0000","modified":"2026-03-10T11:26:04.384127+0000","last_up_change":"2026-03-10T11:26:02.324227+0000","last_in_change":"2026-03-10T11:25:49.543158+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T11:24:45.406653+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],
"osds":[{"osd":0,"uuid":"0992e6dc-d298-462b-bccd-b74959342712","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":47,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6803","nonce":2004210335}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6805","nonce":2004210335}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6809","nonce":2004210335}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6807","nonce":2004210335}]},"public_addr":"192.168.123.105:6803/2004210335","cluster_addr":"192.168.123.105:6805/2004210335","heartbeat_back_addr":"192.168.123.105:6809/2004210335","heartbeat_front_addr":"192.168.123.105:6807/2004210335","state":["exists","up"]},{"osd":1,"uuid":"9cbc5424-3289-45dc-8763-da809c9c9e84","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":30,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6811","nonce":1089345282}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6813","nonce":1089345282}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6817","nonce":1089345282}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6815","nonce":1089345282}]},"public_addr":"192.168.123.105:6811/1089345282","cluster_addr":"192.168.123.105:6813/1089345282","heartbeat_back_addr":"192.168.123.105:6817/1089345282","heartbeat_front_addr":"192.168.123.105:6815/1089345282","state":["exists","up"]},{"osd":2,"uuid":"58079681-6944-4372-ab7d-0aa5717818bf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6819","nonce":420660061}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6821","nonce":420660061}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6825","nonce":420660061}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6823","nonce":420660061}]},"public_addr":"192.168.123.105:6819/420660061","cluster_addr":"192.168.123.105:6821/420660061","heartbeat_back_addr":"192.168.123.105:6825/420660061","heartbeat_front_addr":"192.168.123.105:6823/420660061","state":["exists","up"]},{"osd":3,"uuid":"0e62b553-78b1-4fbe-870e-d68c1967e6be","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6827","nonce":311748923}]},"cluster_addrs":{"addrvec"
:[{"type":"v2","addr":"192.168.123.105:6828","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6829","nonce":311748923}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6832","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6833","nonce":311748923}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6830","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6831","nonce":311748923}]},"public_addr":"192.168.123.105:6827/311748923","cluster_addr":"192.168.123.105:6829/311748923","heartbeat_back_addr":"192.168.123.105:6833/311748923","heartbeat_front_addr":"192.168.123.105:6831/311748923","state":["exists","up"]},{"osd":4,"uuid":"5d2d7aab-4d36-465e-b574-aaa4de107693","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":29,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6801","nonce":774944665}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6803","nonce":774944665}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6807","nonce":774944665}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6805","nonce":774944665}]},"public_addr":"192.168.123.107:6801/774944665","cluster_addr":"192.168.123.107:6803/774944665","heartbeat_back_addr":"192.168.123.107:6807/774944665","heartbeat_front_addr":"192.168.123.107:6805/774944665","state":["exists","up"]},{"osd":5,"uuid":"dcefdca8-8af9-4aeb-9472-1fb1d076fa1e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":35,"up_thru":36,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6809","nonce":1013528300}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6811","nonce":1013528300}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6815","nonce":1013528300}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6813","nonce":1013528300}]},"public_addr":"192.168.123.107:6809/1013528300","cluster_addr":"192.168.123.107:6811/1013528300","heartbeat_back_addr":"192.168.123.107:6815/1013528300","heartbeat_front_addr":"192.168.123.107:6813/1013528300","state":["exists","up"]},{"osd":6,"uuid":"783416c9-d1a2-4d8f-91e5-b6343f3a3d0a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6817","nonce":319224116}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6819","nonce":319224116}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6823","nonce":319224116}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":319224116},{
"type":"v1","addr":"192.168.123.107:6821","nonce":319224116}]},"public_addr":"192.168.123.107:6817/319224116","cluster_addr":"192.168.123.107:6819/319224116","heartbeat_back_addr":"192.168.123.107:6823/319224116","heartbeat_front_addr":"192.168.123.107:6821/319224116","state":["exists","up"]},{"osd":7,"uuid":"d3a17b00-d9f4-4951-b587-40f724c9827b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6825","nonce":3044827210}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6827","nonce":3044827210}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6830","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6831","nonce":3044827210}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6829","nonce":3044827210}]},"public_addr":"192.168.123.107:6825/3044827210","cluster_addr":"192.168.123.107:6827/3044827210","heartbeat_back_addr":"192.168.123.107:6831/3044827210","heartbeat_front_addr":"192.168.123.107:6829/3044827210","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:12.150542+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:26.315392+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:42.990393+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:59.190256+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:25:13.846670+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:25:29.395250+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:25:44.979506+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:26:01.088439+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/1312851658":"2026-03-11T11:23:31.132317+0000","192.168.123.105:6801/1110057132":"2026-03-11T11:23:31.132317+0000","192.168.123.105:6800/1110057132":"2026-03-11T11:23:31.132317+0000","192.168.123.105:0/3902952517":"2026-03-11T11:23:20.465744+0000","192.168.123.105:0/3473116901":"2026-03-11T11:23:31.132317+0000","192.168.123.105:6801/1953728704":"2026-03-11T11:23:20.465744+0000","192.168.123.105:0/4010853674":"2026-03-11T11:23:31.132317+0000","192.168.123.105:0/2723537270":"2026-03-11T11:23:20.465744+0000","192.16
8.123.105:0/3538663775":"2026-03-11T11:23:20.465744+0000","192.168.123.105:6800/1953728704":"2026-03-11T11:23:20.465744+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T11:26:05.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:05 vm05 bash[17453]: cluster 2026-03-10T11:26:04.393561+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-10T11:26:05.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:05 vm05 bash[17453]: audit 2026-03-10T11:26:04.902185+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:05.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:05 vm05 bash[17453]: audit 2026-03-10T11:26:04.903223+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:05 vm05 bash[17453]: audit 2026-03-10T11:26:04.903973+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:05 vm05 bash[17453]: audit 2026-03-10T11:26:04.904568+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.758 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:05 vm05 bash[17453]: audit 2026-03-10T11:26:04.904991+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.759 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:05 vm05 bash[17453]: audit 2026-03-10T11:26:04.910854+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:05.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:05 vm05 bash[22470]: cluster 2026-03-10T11:26:04.393561+0000 mon.a (mon.0) 540 : cluster [DBG] osdmap e49: 8 total, 8 up, 8 in 2026-03-10T11:26:05.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:05 vm05 bash[22470]: audit 2026-03-10T11:26:04.902185+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:05.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:05 vm05 bash[22470]: audit 2026-03-10T11:26:04.903223+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:05 vm05 bash[22470]: audit 2026-03-10T11:26:04.903973+0000 mon.a (mon.0) 543 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.759 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:05 vm05 bash[22470]: audit 2026-03-10T11:26:04.904568+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:05 vm05 bash[22470]: audit 2026-03-10T11:26:04.904991+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:26:05.759 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:05 vm05 bash[22470]: audit 2026-03-10T11:26:04.910854+0000 mon.a (mon.0) 546 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:05.803 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-10T11:24:45.406653+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}}] 2026-03-10T11:26:05.804 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd pool get .mgr pg_num 2026-03-10T11:26:06.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:06 vm07 bash[17804]: cephadm 2026-03-10T11:26:04.893557+0000 mgr.y (mgr.14152) 130 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T11:26:06.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:06 vm07 bash[17804]: cephadm 2026-03-10T11:26:04.905348+0000 mgr.y (mgr.14152) 131 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T11:26:06.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:06 vm07 bash[17804]: cephadm 2026-03-10T11:26:04.905786+0000 mgr.y (mgr.14152) 132 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T11:26:06.698 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:06 vm07 bash[17804]: cluster 2026-03-10T11:26:05.217686+0000 mgr.y (mgr.14152) 133 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:06.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:06 vm07 bash[17804]: audit 2026-03-10T11:26:05.743461+0000 mon.c (mon.1) 15 : audit [DBG] from='client.? 192.168.123.105:0/2980847561' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:26:06.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:06 vm05 bash[22470]: cephadm 2026-03-10T11:26:04.893557+0000 mgr.y (mgr.14152) 130 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T11:26:06.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:06 vm05 bash[22470]: cephadm 2026-03-10T11:26:04.905348+0000 mgr.y (mgr.14152) 131 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T11:26:06.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:06 vm05 bash[22470]: cephadm 2026-03-10T11:26:04.905786+0000 mgr.y (mgr.14152) 132 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T11:26:06.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:06 vm05 bash[22470]: cluster 2026-03-10T11:26:05.217686+0000 mgr.y (mgr.14152) 133 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:06.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:06 vm05 bash[22470]: audit 2026-03-10T11:26:05.743461+0000 mon.c (mon.1) 15 : audit [DBG] from='client.? 192.168.123.105:0/2980847561' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:26:06.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:06 vm05 bash[17453]: cephadm 2026-03-10T11:26:04.893557+0000 mgr.y (mgr.14152) 130 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T11:26:06.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:06 vm05 bash[17453]: cephadm 2026-03-10T11:26:04.905348+0000 mgr.y (mgr.14152) 131 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T11:26:06.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:06 vm05 bash[17453]: cephadm 2026-03-10T11:26:04.905786+0000 mgr.y (mgr.14152) 132 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T11:26:06.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:06 vm05 bash[17453]: cluster 2026-03-10T11:26:05.217686+0000 mgr.y (mgr.14152) 133 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 47 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:06.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:06 vm05 bash[17453]: audit 2026-03-10T11:26:05.743461+0000 mon.c (mon.1) 15 : audit [DBG] from='client.? 
192.168.123.105:0/2980847561' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:26:08.416 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:08.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:08 vm07 bash[17804]: cluster 2026-03-10T11:26:07.217939+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:08.771 INFO:teuthology.orchestra.run.vm05.stdout:pg_num: 1 2026-03-10T11:26:08.784 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:08 vm05 bash[17453]: cluster 2026-03-10T11:26:07.217939+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:08.784 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:08 vm05 bash[22470]: cluster 2026-03-10T11:26:07.217939+0000 mgr.y (mgr.14152) 134 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:08.828 INFO:tasks.cephadm:Adding prometheus.a on vm07 2026-03-10T11:26:08.828 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch apply prometheus '1;vm07=a' 2026-03-10T11:26:09.314 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled prometheus update... 2026-03-10T11:26:09.374 DEBUG:teuthology.orchestra.run.vm07:prometheus.a> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@prometheus.a.service 2026-03-10T11:26:09.375 INFO:tasks.cephadm:Adding node-exporter.a on vm05 2026-03-10T11:26:09.375 INFO:tasks.cephadm:Adding node-exporter.b on vm07 2026-03-10T11:26:09.375 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch apply node-exporter '2;vm05=a;vm07=b' 2026-03-10T11:26:09.601 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:09 vm07 bash[17804]: audit 2026-03-10T11:26:08.768143+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 
192.168.123.105:0/4192147907' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T11:26:09.601 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:09 vm07 bash[17804]: audit 2026-03-10T11:26:09.310905+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:09.601 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:09 vm07 bash[17804]: audit 2026-03-10T11:26:09.333944+0000 mon.a (mon.0) 548 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:09.601 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:09 vm07 bash[17804]: audit 2026-03-10T11:26:09.334862+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:26:09.601 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:09 vm07 bash[17804]: audit 2026-03-10T11:26:09.335409+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:26:09.601 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:09 vm07 bash[17804]: audit 2026-03-10T11:26:09.341163+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:09.601 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:09 vm07 bash[17804]: audit 2026-03-10T11:26:09.344101+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:09 vm05 bash[22470]: audit 2026-03-10T11:26:08.768143+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 
192.168.123.105:0/4192147907' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:09 vm05 bash[22470]: audit 2026-03-10T11:26:09.310905+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:09 vm05 bash[22470]: audit 2026-03-10T11:26:09.333944+0000 mon.a (mon.0) 548 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:09 vm05 bash[22470]: audit 2026-03-10T11:26:09.334862+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:09 vm05 bash[22470]: audit 2026-03-10T11:26:09.335409+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:09 vm05 bash[22470]: audit 2026-03-10T11:26:09.341163+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:09 vm05 bash[22470]: audit 2026-03-10T11:26:09.344101+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:09 vm05 bash[17453]: audit 2026-03-10T11:26:08.768143+0000 mon.c (mon.1) 16 : audit [DBG] from='client.? 
192.168.123.105:0/4192147907' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:09 vm05 bash[17453]: audit 2026-03-10T11:26:09.310905+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:09 vm05 bash[17453]: audit 2026-03-10T11:26:09.333944+0000 mon.a (mon.0) 548 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:09 vm05 bash[17453]: audit 2026-03-10T11:26:09.334862+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:09 vm05 bash[17453]: audit 2026-03-10T11:26:09.335409+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:09 vm05 bash[17453]: audit 2026-03-10T11:26:09.341163+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:09.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:09 vm05 bash[17453]: audit 2026-03-10T11:26:09.344101+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-10T11:26:09.892 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled node-exporter update... 
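The quoted placement arguments in these orch apply calls ('1;vm07=a', '2;vm05=a;vm07=b', and '1;vm05=a' for alertmanager just below) follow a "<count>;<host>=<daemon-id>" shorthand: a daemon count, then each host a daemon is pinned to together with the id it should get. A small illustrative parser for that shape (names are hypothetical; this is not cephadm's actual placement-spec parser):

    def parse_placement(spec: str):
        # "2;vm05=a;vm07=b" -> count 2, daemon a on vm05, daemon b on vm07
        count_s, *pairs = spec.split(";")
        hosts = dict(pair.split("=", 1) for pair in pairs)
        return {"count": int(count_s), "hosts": hosts}

    print(parse_placement("2;vm05=a;vm07=b"))
    # {'count': 2, 'hosts': {'vm05': 'a', 'vm07': 'b'}}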
2026-03-10T11:26:09.952 DEBUG:teuthology.orchestra.run.vm05:node-exporter.a> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.a.service 2026-03-10T11:26:09.953 DEBUG:teuthology.orchestra.run.vm07:node-exporter.b> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.b.service 2026-03-10T11:26:09.954 INFO:tasks.cephadm:Adding alertmanager.a on vm05 2026-03-10T11:26:09.954 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch apply alertmanager '1;vm05=a' 2026-03-10T11:26:10.423 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:10 vm07 bash[18531]: ignoring --setuser ceph since I am not root 2026-03-10T11:26:10.423 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:10 vm07 bash[18531]: ignoring --setgroup ceph since I am not root 2026-03-10T11:26:10.698 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:10 vm07 bash[18531]: debug 2026-03-10T11:26:10.470+0000 7f614d986000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:26:10.698 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:10 vm07 bash[18531]: debug 2026-03-10T11:26:10.522+0000 7f614d986000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:26:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:10 vm07 bash[17804]: cluster 2026-03-10T11:26:09.218208+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v118: 1 pgs: 1 active+recovering; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:10 vm07 bash[17804]: audit 2026-03-10T11:26:09.304154+0000 mgr.y (mgr.14152) 136 : audit [DBG] from='client.24290 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:26:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:10 vm07 bash[17804]: cephadm 2026-03-10T11:26:09.305039+0000 mgr.y (mgr.14152) 137 : cephadm [INF] Saving service prometheus spec with placement vm07=a;count:1 2026-03-10T11:26:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:10 vm07 bash[17804]: audit 2026-03-10T11:26:09.889129+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:10 vm07 bash[17804]: audit 2026-03-10T11:26:10.351118+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T11:26:10.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:10 vm07 bash[17804]: cluster 2026-03-10T11:26:10.351222+0000 mon.a (mon.0) 555 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-10T11:26:10.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:10 vm05 bash[22470]: cluster 2026-03-10T11:26:09.218208+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v118: 1 pgs: 1 active+recovering; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:10 vm05 bash[22470]: audit 2026-03-10T11:26:09.304154+0000 mgr.y (mgr.14152) 136 : audit [DBG] from='client.24290 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm07=a", "target": 
["mon-mgr", ""]}]: dispatch 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:10 vm05 bash[22470]: cephadm 2026-03-10T11:26:09.305039+0000 mgr.y (mgr.14152) 137 : cephadm [INF] Saving service prometheus spec with placement vm07=a;count:1 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:10 vm05 bash[22470]: audit 2026-03-10T11:26:09.889129+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:10 vm05 bash[22470]: audit 2026-03-10T11:26:10.351118+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:10 vm05 bash[22470]: cluster 2026-03-10T11:26:10.351222+0000 mon.a (mon.0) 555 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:10 vm05 bash[17453]: cluster 2026-03-10T11:26:09.218208+0000 mgr.y (mgr.14152) 135 : cluster [DBG] pgmap v118: 1 pgs: 1 active+recovering; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:10 vm05 bash[17453]: audit 2026-03-10T11:26:09.304154+0000 mgr.y (mgr.14152) 136 : audit [DBG] from='client.24290 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:10 vm05 bash[17453]: cephadm 2026-03-10T11:26:09.305039+0000 mgr.y (mgr.14152) 137 : cephadm [INF] Saving service prometheus spec with placement vm07=a;count:1 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:10 vm05 bash[17453]: audit 2026-03-10T11:26:09.889129+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:10 vm05 bash[17453]: audit 2026-03-10T11:26:10.351118+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.14152 192.168.123.105:0/2384636217' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:10 vm05 bash[17453]: cluster 2026-03-10T11:26:10.351222+0000 mon.a (mon.0) 555 : cluster [DBG] mgrmap e16: y(active, since 2m), standbys: x 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:10 vm05 bash[17722]: ignoring --setuser ceph since I am not root 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:10 vm05 bash[17722]: ignoring --setgroup ceph since I am not root 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:10 vm05 bash[17722]: debug 2026-03-10T11:26:10.388+0000 7f7a9bbee700 1 -- 192.168.123.105:0/1526363691 <== mon.1 v2:192.168.123.105:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (secure 0 0 0) 0x55ade5016340 con 0x55ade511c400 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:10 vm05 bash[17722]: debug 2026-03-10T11:26:10.468+0000 7f7aa464a000 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:26:10.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:10 vm05 bash[17722]: debug 2026-03-10T11:26:10.524+0000 7f7aa464a000 -1 
mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:26:11.198 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:10 vm07 bash[18531]: debug 2026-03-10T11:26:10.838+0000 7f614d986000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:26:11.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:10 vm05 bash[17722]: debug 2026-03-10T11:26:10.852+0000 7f7aa464a000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:26:11.671 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:11 vm07 bash[18531]: debug 2026-03-10T11:26:11.358+0000 7f614d986000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:26:11.671 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:11 vm07 bash[18531]: debug 2026-03-10T11:26:11.462+0000 7f614d986000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:26:11.703 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:11 vm05 bash[17722]: debug 2026-03-10T11:26:11.388+0000 7f7aa464a000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:26:11.704 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:11 vm05 bash[17722]: debug 2026-03-10T11:26:11.484+0000 7f7aa464a000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:26:11.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:11 vm07 bash[18531]: debug 2026-03-10T11:26:11.666+0000 7f614d986000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:26:11.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:11 vm07 bash[18531]: debug 2026-03-10T11:26:11.766+0000 7f614d986000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:26:11.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:11 vm07 bash[18531]: debug 2026-03-10T11:26:11.826+0000 7f614d986000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:26:12.016 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:11 vm05 bash[17722]: debug 2026-03-10T11:26:11.696+0000 7f7aa464a000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:26:12.016 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:11 vm05 bash[17722]: debug 2026-03-10T11:26:11.800+0000 7f7aa464a000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:26:12.016 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:11 vm05 bash[17722]: debug 2026-03-10T11:26:11.860+0000 7f7aa464a000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:26:12.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:12 vm05 bash[17722]: debug 2026-03-10T11:26:12.008+0000 7f7aa464a000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:26:12.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:12 vm05 bash[17722]: debug 2026-03-10T11:26:12.072+0000 7f7aa464a000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:26:12.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:12 vm05 bash[17722]: debug 2026-03-10T11:26:12.144+0000 7f7aa464a000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:26:12.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:11 vm07 bash[18531]: debug 2026-03-10T11:26:11.946+0000 7f614d986000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:26:12.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:12 vm07 bash[18531]: debug 2026-03-10T11:26:12.006+0000 7f614d986000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES 
member 2026-03-10T11:26:12.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:12 vm07 bash[18531]: debug 2026-03-10T11:26:12.070+0000 7f614d986000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:26:12.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:12 vm07 bash[18531]: debug 2026-03-10T11:26:12.618+0000 7f614d986000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:26:12.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:12 vm07 bash[18531]: debug 2026-03-10T11:26:12.682+0000 7f614d986000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:26:12.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:12 vm07 bash[18531]: debug 2026-03-10T11:26:12.738+0000 7f614d986000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:26:13.097 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:12 vm05 bash[17722]: debug 2026-03-10T11:26:12.688+0000 7f7aa464a000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:26:13.097 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:12 vm05 bash[17722]: debug 2026-03-10T11:26:12.748+0000 7f7aa464a000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:26:13.098 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:12 vm05 bash[17722]: debug 2026-03-10T11:26:12.808+0000 7f7aa464a000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:26:13.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:13 vm07 bash[18531]: debug 2026-03-10T11:26:13.066+0000 7f614d986000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:26:13.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:13 vm07 bash[18531]: debug 2026-03-10T11:26:13.134+0000 7f614d986000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:26:13.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:13 vm07 bash[18531]: debug 2026-03-10T11:26:13.198+0000 7f614d986000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:26:13.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:13 vm07 bash[18531]: debug 2026-03-10T11:26:13.290+0000 7f614d986000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:26:13.598 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:13 vm05 bash[17722]: debug 2026-03-10T11:26:13.148+0000 7f7aa464a000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:26:13.598 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:13 vm05 bash[17722]: debug 2026-03-10T11:26:13.204+0000 7f7aa464a000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:26:13.598 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:13 vm05 bash[17722]: debug 2026-03-10T11:26:13.268+0000 7f7aa464a000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:26:13.598 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:13 vm05 bash[17722]: debug 2026-03-10T11:26:13.360+0000 7f7aa464a000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:26:13.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:13 vm07 bash[18531]: debug 2026-03-10T11:26:13.646+0000 7f614d986000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:26:13.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:13 vm07 bash[18531]: debug 2026-03-10T11:26:13.826+0000 7f614d986000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:26:13.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 
11:26:13 vm07 bash[18531]: debug 2026-03-10T11:26:13.886+0000 7f614d986000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:26:14.045 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:13 vm05 bash[17722]: debug 2026-03-10T11:26:13.720+0000 7f7aa464a000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:26:14.045 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:13 vm05 bash[17722]: debug 2026-03-10T11:26:13.904+0000 7f7aa464a000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:26:14.045 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:13 vm05 bash[17722]: debug 2026-03-10T11:26:13.968+0000 7f7aa464a000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:26:14.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:14 vm05 bash[17722]: debug 2026-03-10T11:26:14.036+0000 7f7aa464a000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:26:14.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:14 vm05 bash[17722]: debug 2026-03-10T11:26:14.196+0000 7f7aa464a000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:26:14.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:13 vm07 bash[18531]: debug 2026-03-10T11:26:13.954+0000 7f614d986000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:26:14.448 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:14 vm07 bash[18531]: debug 2026-03-10T11:26:14.110+0000 7f614d986000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:14 vm07 bash[18531]: debug 2026-03-10T11:26:14.678+0000 7f614d986000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:14 vm07 bash[18531]: [10/Mar/2026:11:26:14] ENGINE Bus STARTING 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:14 vm07 bash[18531]: CherryPy Checker: 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:14 vm07 bash[18531]: The Application mounted at '' has an empty config. 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:14 vm07 bash[18531]: [10/Mar/2026:11:26:14] ENGINE Serving on http://:::9283 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:14 vm07 bash[18531]: [10/Mar/2026:11:26:14] ENGINE Bus STARTED 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:14 vm07 bash[17804]: cluster 2026-03-10T11:26:14.683040+0000 mon.a (mon.0) 556 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:14 vm07 bash[17804]: cluster 2026-03-10T11:26:14.683168+0000 mon.a (mon.0) 557 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:14 vm07 bash[17804]: audit 2026-03-10T11:26:14.687041+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:14 vm07 bash[17804]: audit 2026-03-10T11:26:14.688576+0000 mon.b (mon.2) 25 : audit [DBG] from='mgr.? 
192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:14 vm07 bash[17804]: audit 2026-03-10T11:26:14.691393+0000 mon.b (mon.2) 26 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:26:14.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:14 vm07 bash[17804]: audit 2026-03-10T11:26:14.691788+0000 mon.b (mon.2) 27 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:14 vm05 bash[22470]: cluster 2026-03-10T11:26:14.683040+0000 mon.a (mon.0) 556 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:14 vm05 bash[22470]: cluster 2026-03-10T11:26:14.683168+0000 mon.a (mon.0) 557 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:14 vm05 bash[22470]: audit 2026-03-10T11:26:14.687041+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:14 vm05 bash[22470]: audit 2026-03-10T11:26:14.688576+0000 mon.b (mon.2) 25 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:14 vm05 bash[22470]: audit 2026-03-10T11:26:14.691393+0000 mon.b (mon.2) 26 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:14 vm05 bash[22470]: audit 2026-03-10T11:26:14.691788+0000 mon.b (mon.2) 27 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:14 vm05 bash[17453]: cluster 2026-03-10T11:26:14.683040+0000 mon.a (mon.0) 556 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:14 vm05 bash[17453]: cluster 2026-03-10T11:26:14.683168+0000 mon.a (mon.0) 557 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:14 vm05 bash[17453]: audit 2026-03-10T11:26:14.687041+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:14 vm05 bash[17453]: audit 2026-03-10T11:26:14.688576+0000 mon.b (mon.2) 25 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:14 vm05 bash[17453]: audit 2026-03-10T11:26:14.691393+0000 mon.b (mon.2) 26 : audit [DBG] from='mgr.? 
192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:14 vm05 bash[17453]: audit 2026-03-10T11:26:14.691788+0000 mon.b (mon.2) 27 : audit [DBG] from='mgr.? 192.168.123.107:0/3323303465' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:26:15.098 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:14 vm05 bash[17722]: debug 2026-03-10T11:26:14.768+0000 7f7aa464a000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:26:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:15 vm05 bash[17453]: cluster 2026-03-10T11:26:14.742761+0000 mon.a (mon.0) 558 : cluster [DBG] mgrmap e17: y(active, since 2m), standbys: x 2026-03-10T11:26:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:15 vm05 bash[17453]: cluster 2026-03-10T11:26:14.773951+0000 mon.a (mon.0) 559 : cluster [INF] Active manager daemon y restarted 2026-03-10T11:26:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:15 vm05 bash[17453]: cluster 2026-03-10T11:26:14.774854+0000 mon.a (mon.0) 560 : cluster [INF] Activating manager daemon y 2026-03-10T11:26:16.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:15 vm05 bash[17453]: cluster 2026-03-10T11:26:14.780176+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-10T11:26:16.098 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:15 vm05 bash[17722]: [10/Mar/2026:11:26:15] ENGINE Bus STARTING 2026-03-10T11:26:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:15 vm05 bash[22470]: cluster 2026-03-10T11:26:14.742761+0000 mon.a (mon.0) 558 : cluster [DBG] mgrmap e17: y(active, since 2m), standbys: x 2026-03-10T11:26:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:15 vm05 bash[22470]: cluster 2026-03-10T11:26:14.773951+0000 mon.a (mon.0) 559 : cluster [INF] Active manager daemon y restarted 2026-03-10T11:26:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:15 vm05 bash[22470]: cluster 2026-03-10T11:26:14.774854+0000 mon.a (mon.0) 560 : cluster [INF] Activating manager daemon y 2026-03-10T11:26:16.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:15 vm05 bash[22470]: cluster 2026-03-10T11:26:14.780176+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-10T11:26:16.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:15 vm07 bash[17804]: cluster 2026-03-10T11:26:14.742761+0000 mon.a (mon.0) 558 : cluster [DBG] mgrmap e17: y(active, since 2m), standbys: x 2026-03-10T11:26:16.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:15 vm07 bash[17804]: cluster 2026-03-10T11:26:14.773951+0000 mon.a (mon.0) 559 : cluster [INF] Active manager daemon y restarted 2026-03-10T11:26:16.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:15 vm07 bash[17804]: cluster 2026-03-10T11:26:14.774854+0000 mon.a (mon.0) 560 : cluster [INF] Activating manager daemon y 2026-03-10T11:26:16.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:15 vm07 bash[17804]: cluster 2026-03-10T11:26:14.780176+0000 mon.a (mon.0) 561 : cluster [DBG] osdmap e50: 8 total, 8 up, 8 in 2026-03-10T11:26:16.420 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:16 vm05 bash[17722]: CherryPy Checker: 2026-03-10T11:26:16.420 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:16 vm05 bash[17722]: The Application mounted at '' has an empty config. 
2026-03-10T11:26:16.420 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:16 vm05 bash[17722]: [10/Mar/2026:11:26:16] ENGINE Serving on http://:::9283 2026-03-10T11:26:16.420 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:16 vm05 bash[17722]: [10/Mar/2026:11:26:16] ENGINE Bus STARTED 2026-03-10T11:26:16.420 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:16 vm05 bash[17722]: [10/Mar/2026:11:26:16] ENGINE Bus STARTING 2026-03-10T11:26:16.795 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:16 vm05 bash[17722]: [10/Mar/2026:11:26:16] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:26:16.795 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:16 vm05 bash[17722]: [10/Mar/2026:11:26:16] ENGINE Bus STARTED 2026-03-10T11:26:16.815 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled alertmanager update... 2026-03-10T11:26:16.876 DEBUG:teuthology.orchestra.run.vm05:alertmanager.a> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@alertmanager.a.service 2026-03-10T11:26:16.877 INFO:tasks.cephadm:Adding grafana.a on vm07 2026-03-10T11:26:16.877 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch apply grafana '1;vm07=a' 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: cluster 2026-03-10T11:26:15.766104+0000 mon.a (mon.0) 562 : cluster [DBG] mgrmap e18: y(active, starting, since 0.991322s), standbys: x 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.767496+0000 mon.b (mon.2) 28 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.767669+0000 mon.b (mon.2) 29 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.767803+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.769731+0000 mon.b (mon.2) 31 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.769796+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.769930+0000 mon.b (mon.2) 33 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.769958+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", 
"id": 1}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.769979+0000 mon.b (mon.2) 35 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.769998+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.770017+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.770036+0000 mon.b (mon.2) 38 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.770055+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.770076+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.770632+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.770665+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.770698+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: cluster 2026-03-10T11:26:15.828015+0000 mon.a (mon.0) 563 : cluster [INF] Manager daemon y is now available 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.850917+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.856826+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.858759+0000 mon.b (mon.2) 45 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.862461+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.863599+0000 mon.b (mon.2) 47 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.877361+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.878511+0000 mon.b (mon.2) 48 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.915106+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:15.916274+0000 mon.b (mon.2) 49 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:26:17.065 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:16 vm07 bash[17804]: audit 2026-03-10T11:26:16.539422+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: cluster 2026-03-10T11:26:15.766104+0000 mon.a (mon.0) 562 : cluster [DBG] mgrmap e18: y(active, starting, since 0.991322s), standbys: x 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.767496+0000 mon.b (mon.2) 28 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.767669+0000 mon.b (mon.2) 29 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.767803+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.769731+0000 mon.b (mon.2) 31 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.769796+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' 
entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.769930+0000 mon.b (mon.2) 33 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.769958+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.769979+0000 mon.b (mon.2) 35 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.769998+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.770017+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.770036+0000 mon.b (mon.2) 38 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.770055+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.770076+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.770632+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.770665+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.770698+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: cluster 2026-03-10T11:26:15.828015+0000 mon.a (mon.0) 563 : cluster [INF] Manager daemon y is now available 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.850917+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:17.098 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.856826+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.858759+0000 mon.b (mon.2) 45 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.862461+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.863599+0000 mon.b (mon.2) 47 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.877361+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.878511+0000 mon.b (mon.2) 48 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:26:17.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.915106+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:15.916274+0000 mon.b (mon.2) 49 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:16 vm05 bash[22470]: audit 2026-03-10T11:26:16.539422+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: cluster 2026-03-10T11:26:15.766104+0000 mon.a (mon.0) 562 : cluster [DBG] mgrmap e18: y(active, starting, since 0.991322s), standbys: x 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.767496+0000 mon.b (mon.2) 28 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.767669+0000 mon.b (mon.2) 29 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.767803+0000 mon.b (mon.2) 30 : audit [DBG] 
from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.769731+0000 mon.b (mon.2) 31 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.769796+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.769930+0000 mon.b (mon.2) 33 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.769958+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.769979+0000 mon.b (mon.2) 35 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.769998+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.770017+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.770036+0000 mon.b (mon.2) 38 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.770055+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.770076+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.770632+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.770665+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 
vm05 bash[17453]: audit 2026-03-10T11:26:15.770698+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: cluster 2026-03-10T11:26:15.828015+0000 mon.a (mon.0) 563 : cluster [INF] Manager daemon y is now available 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.850917+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.856826+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.858759+0000 mon.b (mon.2) 45 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.862461+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.863599+0000 mon.b (mon.2) 47 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.877361+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.878511+0000 mon.b (mon.2) 48 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.915106+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:15.916274+0000 mon.b (mon.2) 49 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:26:17.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:16 vm05 bash[17453]: audit 2026-03-10T11:26:16.539422+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:17.351 INFO:teuthology.orchestra.run.vm07.stdout:Scheduled grafana update... 2026-03-10T11:26:17.410 DEBUG:teuthology.orchestra.run.vm07:grafana.a> sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@grafana.a.service 2026-03-10T11:26:17.411 INFO:tasks.cephadm:Setting up client nodes... 
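Each time a daemon is scheduled, the harness attaches a journal follower before the container comes up, so no startup messages are lost: `-n 0` suppresses any backlog and `-f` streams only new entries. The unit name pattern and fsid below are exactly the ones from this run:

    # Follow a cephadm-managed daemon's journal from the moment of attach:
    sudo journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@grafana.a.service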
2026-03-10T11:26:17.411 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T11:26:17.917 INFO:teuthology.orchestra.run.vm05.stdout:[client.0] 2026-03-10T11:26:17.917 INFO:teuthology.orchestra.run.vm05.stdout: key = AQDZ/69pl70SNhAAhgxlPtKZo3WAZgqc92pQ8g== 2026-03-10T11:26:17.974 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-10T11:26:17.974 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-10T11:26:17.974 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-10T11:26:17.988 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: cephadm 2026-03-10T11:26:16.417252+0000 mgr.y (mgr.24310) 1 : cephadm [INF] [10/Mar/2026:11:26:16] ENGINE Bus STARTING 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: cephadm 2026-03-10T11:26:16.529459+0000 mgr.y (mgr.24310) 2 : cephadm [INF] [10/Mar/2026:11:26:16] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: cephadm 2026-03-10T11:26:16.529704+0000 mgr.y (mgr.24310) 3 : cephadm [INF] [10/Mar/2026:11:26:16] ENGINE Bus STARTED 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: cluster 2026-03-10T11:26:16.779628+0000 mon.a (mon.0) 568 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: audit 2026-03-10T11:26:16.786606+0000 mgr.y (mgr.24310) 4 : audit [DBG] from='client.24302 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: cephadm 2026-03-10T11:26:16.790837+0000 mgr.y (mgr.24310) 5 : cephadm [INF] Saving service alertmanager spec with placement vm05=a;count:1 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: cluster 2026-03-10T11:26:16.798946+0000 mgr.y (mgr.24310) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: audit 2026-03-10T11:26:16.810834+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: audit 2026-03-10T11:26:17.340240+0000 mgr.y (mgr.24310) 7 : audit [DBG] from='client.24329 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: cephadm 2026-03-10T11:26:17.341472+0000 mgr.y (mgr.24310) 
8 : cephadm [INF] Saving service grafana spec with placement vm07=a;count:1 2026-03-10T11:26:18.093 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:17 vm07 bash[17804]: audit 2026-03-10T11:26:17.345870+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: cephadm 2026-03-10T11:26:16.417252+0000 mgr.y (mgr.24310) 1 : cephadm [INF] [10/Mar/2026:11:26:16] ENGINE Bus STARTING 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: cephadm 2026-03-10T11:26:16.529459+0000 mgr.y (mgr.24310) 2 : cephadm [INF] [10/Mar/2026:11:26:16] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: cephadm 2026-03-10T11:26:16.529704+0000 mgr.y (mgr.24310) 3 : cephadm [INF] [10/Mar/2026:11:26:16] ENGINE Bus STARTED 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: cluster 2026-03-10T11:26:16.779628+0000 mon.a (mon.0) 568 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: audit 2026-03-10T11:26:16.786606+0000 mgr.y (mgr.24310) 4 : audit [DBG] from='client.24302 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: cephadm 2026-03-10T11:26:16.790837+0000 mgr.y (mgr.24310) 5 : cephadm [INF] Saving service alertmanager spec with placement vm05=a;count:1 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: cluster 2026-03-10T11:26:16.798946+0000 mgr.y (mgr.24310) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: audit 2026-03-10T11:26:16.810834+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: audit 2026-03-10T11:26:17.340240+0000 mgr.y (mgr.24310) 7 : audit [DBG] from='client.24329 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: cephadm 2026-03-10T11:26:17.341472+0000 mgr.y (mgr.24310) 8 : cephadm [INF] Saving service grafana spec with placement vm07=a;count:1 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:17 vm05 bash[22470]: audit 2026-03-10T11:26:17.345870+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: cephadm 2026-03-10T11:26:16.417252+0000 mgr.y (mgr.24310) 1 : cephadm [INF] [10/Mar/2026:11:26:16] ENGINE Bus STARTING 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: cephadm 2026-03-10T11:26:16.529459+0000 mgr.y (mgr.24310) 2 : cephadm [INF] [10/Mar/2026:11:26:16] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: cephadm 
2026-03-10T11:26:16.529704+0000 mgr.y (mgr.24310) 3 : cephadm [INF] [10/Mar/2026:11:26:16] ENGINE Bus STARTED 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: cluster 2026-03-10T11:26:16.779628+0000 mon.a (mon.0) 568 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: audit 2026-03-10T11:26:16.786606+0000 mgr.y (mgr.24310) 4 : audit [DBG] from='client.24302 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm05=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: cephadm 2026-03-10T11:26:16.790837+0000 mgr.y (mgr.24310) 5 : cephadm [INF] Saving service alertmanager spec with placement vm05=a;count:1 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: cluster 2026-03-10T11:26:16.798946+0000 mgr.y (mgr.24310) 6 : cluster [DBG] pgmap v3: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: audit 2026-03-10T11:26:16.810834+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: audit 2026-03-10T11:26:17.340240+0000 mgr.y (mgr.24310) 7 : audit [DBG] from='client.24329 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm07=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: cephadm 2026-03-10T11:26:17.341472+0000 mgr.y (mgr.24310) 8 : cephadm [INF] Saving service grafana spec with placement vm07=a;count:1 2026-03-10T11:26:18.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:17 vm05 bash[17453]: audit 2026-03-10T11:26:17.345870+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:18.465 INFO:teuthology.orchestra.run.vm07.stdout:[client.1] 2026-03-10T11:26:18.465 INFO:teuthology.orchestra.run.vm07.stdout: key = AQDa/69pV2c1GxAASFpt4jSkH0YYeeUFDscf+w== 2026-03-10T11:26:18.525 DEBUG:teuthology.orchestra.run.vm07:> set -ex 2026-03-10T11:26:18.525 DEBUG:teuthology.orchestra.run.vm07:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-10T11:26:18.525 DEBUG:teuthology.orchestra.run.vm07:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 2026-03-10T11:26:18.540 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
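The client setup just logged is a three-step pattern: mint (or fetch) the key inside a cephadm shell, stream the keyring out to the host with `dd`, and relax its mode so unprivileged test clients can read it. A condensed sketch of the same steps, with the caps, path, and mode as in this run and the `cephadm shell` flags elided:

    # Create client.1 if absent, emit its keyring, and install it on the host:
    sudo cephadm shell -- ceph auth get-or-create client.1 \
        mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' |
      sudo dd of=/etc/ceph/ceph.client.1.keyring
    sudo chmod 0644 /etc/ceph/ceph.client.1.keyring

`get-or-create` is idempotent, which is why the audit log shows both a `dispatch` and a `finished` entry per client but only one key ever exists.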
2026-03-10T11:26:18.540 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available
2026-03-10T11:26:18.540 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph mgr dump --format=json
2026-03-10T11:26:19.079 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:18 vm05 bash[17453]: cluster 2026-03-10T11:26:17.769167+0000 mgr.y (mgr.24310) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:18 vm05 bash[17453]: cluster 2026-03-10T11:26:17.793404+0000 mon.a (mon.0) 571 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:18 vm05 bash[17453]: audit 2026-03-10T11:26:17.906690+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.105:0/2147277654' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:18 vm05 bash[17453]: audit 2026-03-10T11:26:17.907088+0000 mon.a (mon.0) 572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:18 vm05 bash[17453]: audit 2026-03-10T11:26:17.912002+0000 mon.a (mon.0) 573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:18 vm05 bash[17453]: audit 2026-03-10T11:26:18.456381+0000 mon.a (mon.0) 574 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:18 vm05 bash[17453]: audit 2026-03-10T11:26:18.457558+0000 mon.b (mon.2) 50 : audit [INF] from='client.? 192.168.123.107:0/2182739738' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:18 vm05 bash[17453]: audit 2026-03-10T11:26:18.461125+0000 mon.a (mon.0) 575 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:18 vm05 bash[22470]: cluster 2026-03-10T11:26:17.769167+0000 mgr.y (mgr.24310) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:18 vm05 bash[22470]: cluster 2026-03-10T11:26:17.793404+0000 mon.a (mon.0) 571 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:18 vm05 bash[22470]: audit 2026-03-10T11:26:17.906690+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.105:0/2147277654' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:18 vm05 bash[22470]: audit 2026-03-10T11:26:17.907088+0000 mon.a (mon.0) 572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:18 vm05 bash[22470]: audit 2026-03-10T11:26:17.912002+0000 mon.a (mon.0) 573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:18 vm05 bash[22470]: audit 2026-03-10T11:26:18.456381+0000 mon.a (mon.0) 574 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:18 vm05 bash[22470]: audit 2026-03-10T11:26:18.457558+0000 mon.b (mon.2) 50 : audit [INF] from='client.? 192.168.123.107:0/2182739738' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.080 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:18 vm05 bash[22470]: audit 2026-03-10T11:26:18.461125+0000 mon.a (mon.0) 575 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T11:26:19.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:18 vm07 bash[17804]: cluster 2026-03-10T11:26:17.769167+0000 mgr.y (mgr.24310) 9 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:19.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:18 vm07 bash[17804]: cluster 2026-03-10T11:26:17.793404+0000 mon.a (mon.0) 571 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x
2026-03-10T11:26:19.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:18 vm07 bash[17804]: audit 2026-03-10T11:26:17.906690+0000 mon.c (mon.1) 17 : audit [INF] from='client.? 192.168.123.105:0/2147277654' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:18 vm07 bash[17804]: audit 2026-03-10T11:26:17.907088+0000 mon.a (mon.0) 572 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:18 vm07 bash[17804]: audit 2026-03-10T11:26:17.912002+0000 mon.a (mon.0) 573 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T11:26:19.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:18 vm07 bash[17804]: audit 2026-03-10T11:26:18.456381+0000 mon.a (mon.0) 574 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:18 vm07 bash[17804]: audit 2026-03-10T11:26:18.457558+0000 mon.b (mon.2) 50 : audit [INF] from='client.? 192.168.123.107:0/2182739738' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch
2026-03-10T11:26:19.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:18 vm07 bash[17804]: audit 2026-03-10T11:26:18.461125+0000 mon.a (mon.0) 575 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished
2026-03-10T11:26:20.243 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.243 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.069246+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.243 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.284370+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.243 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.370418+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.243 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.375871+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: cephadm 2026-03-10T11:26:19.376941+0000 mgr.y (mgr.24310) 10 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.377046+0000 mon.b (mon.2) 51 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.512156+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.563612+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.566728+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.567593+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.568083+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.569417+0000 mon.b (mon.2) 53 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.569428+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.570595+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.570775+0000 mon.b (mon.2) 54 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.571949+0000 mon.b (mon.2) 55 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.722573+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[17453]: audit 2026-03-10T11:26:19.730109+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.069246+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.284370+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.370418+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.375871+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: cephadm 2026-03-10T11:26:19.376941+0000 mgr.y (mgr.24310) 10 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.377046+0000 mon.b (mon.2) 51 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.512156+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.563612+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.566728+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.567593+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.568083+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.569417+0000 mon.b (mon.2) 53 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.569428+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.570595+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.570775+0000 mon.b (mon.2) 54 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.571949+0000 mon.b (mon.2) 55 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.722573+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 bash[22470]: audit 2026-03-10T11:26:19.730109+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.244 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.244 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.244 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.244 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.245 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.245 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.340 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.069246+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.340 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.284370+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.340 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.370418+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.340 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.375871+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.340 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: cephadm 2026-03-10T11:26:19.376941+0000 mgr.y (mgr.24310) 10 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.377046+0000 mon.b (mon.2) 51 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.512156+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.563612+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.566728+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.567593+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.568083+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.569417+0000 mon.b (mon.2) 53 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.569428+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.570595+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.570775+0000 mon.b (mon.2) 54 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.571949+0000 mon.b (mon.2) 55 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.722573+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.341 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[17804]: audit 2026-03-10T11:26:19.730109+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:20.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.598 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.598 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: Started Ceph node-exporter.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:26:20.598 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:20 vm05 bash[37233]: Unable to find image 'quay.io/prometheus/node-exporter:v1.3.1' locally
2026-03-10T11:26:20.598 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.598 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:26:20 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.861 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config
2026-03-10T11:26:20.901 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: Started Ceph node-exporter.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:20.901 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:26:20 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:21.198 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:20 vm07 bash[32761]: Unable to find image 'quay.io/prometheus/node-exporter:v1.3.1' locally
2026-03-10T11:26:21.199 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:21 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:21.297 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:26:21.364 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":20,"active_gid":24310,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":1590912030},{"type":"v1","addr":"192.168.123.105:6801","nonce":1590912030}]},"active_addr":"192.168.123.105:6801/1590912030","active_change":"2026-03-10T11:26:14.774775+0000","active_mgr_features":4540138303579357183,"available":true,"standbys":[{"gid":24308,"name":"x","mgr_features":4540138303579357183,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2400","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"7","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.23.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:8.3.5","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"docker.io/library/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"docker.io/arcts/keepalived","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.3.1","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.33.4","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool 
for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all 
pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"serve
r_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"",
"enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False
","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name
":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default
_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format 
HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2400","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"7","min":"0","max":"7","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 or 7 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","upmap"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.23.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/ceph-grafana:8.3.5","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"docker.io/library/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"docker.io/arcts/keepalived","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.3.1","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.33.4","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with 
`--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"docker.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are 
removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_val
ue":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_P
OLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool 
for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"noautoscale":{"name":"noautoscale","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"global autoscale flag","long_desc":"Option to turn on/off the autoscaler for all 
pools","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"serve
r_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"drive_group_interval":{"name":"drive_group_interval","type":"float","level":"advanced","flags":0,"default_value":"300.0","min":"","max":"","enum_allowed":[],"desc":"interval in seconds between re-application of applied drive_groups","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"",
"enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False
","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name
":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default
_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.105:8443/","prometheus":"http://192.168.123.105:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"last_failure_osd_epoch":50,"active_clients":[{"addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":1703505188}]},{"addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":2576592578}]},{"addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":2551822194}]},{"addrvec":[{"type":"v2","addr":"192.168.123.105:0","nonce":4225849513}]}]} 2026-03-10T11:26:21.366 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-10T11:26:21.366 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-10T11:26:21.366 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd dump --format=json 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: cephadm 2026-03-10T11:26:19.439729+0000 mgr.y (mgr.24310) 11 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: cephadm 2026-03-10T11:26:19.571393+0000 mgr.y (mgr.24310) 12 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: cephadm 2026-03-10T11:26:19.572494+0000 mgr.y (mgr.24310) 13 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: cephadm 2026-03-10T11:26:19.572556+0000 mgr.y (mgr.24310) 14 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: cephadm 2026-03-10T11:26:19.641503+0000 mgr.y (mgr.24310) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: cephadm 2026-03-10T11:26:19.731584+0000 mgr.y (mgr.24310) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm05 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: cluster 2026-03-10T11:26:19.769469+0000 mgr.y (mgr.24310) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: audit 2026-03-10T11:26:20.329005+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: cephadm 2026-03-10T11:26:20.331342+0000 mgr.y (mgr.24310) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm07 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: audit 2026-03-10T11:26:20.924662+0000 mon.a 
(mon.0) 589 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[17453]: audit 2026-03-10T11:26:21.292008+0000 mon.c (mon.1) 18 : audit [DBG] from='client.? 192.168.123.105:0/3478093217' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: cephadm 2026-03-10T11:26:19.439729+0000 mgr.y (mgr.24310) 11 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: cephadm 2026-03-10T11:26:19.571393+0000 mgr.y (mgr.24310) 12 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: cephadm 2026-03-10T11:26:19.572494+0000 mgr.y (mgr.24310) 13 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: cephadm 2026-03-10T11:26:19.572556+0000 mgr.y (mgr.24310) 14 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: cephadm 2026-03-10T11:26:19.641503+0000 mgr.y (mgr.24310) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: cephadm 2026-03-10T11:26:19.731584+0000 mgr.y (mgr.24310) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm05 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: cluster 2026-03-10T11:26:19.769469+0000 mgr.y (mgr.24310) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: audit 2026-03-10T11:26:20.329005+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: cephadm 2026-03-10T11:26:20.331342+0000 mgr.y (mgr.24310) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm07 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: audit 2026-03-10T11:26:20.924662+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:21.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:21 vm05 bash[22470]: audit 2026-03-10T11:26:21.292008+0000 mon.c (mon.1) 18 : audit [DBG] from='client.? 
192.168.123.105:0/3478093217' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: cephadm 2026-03-10T11:26:19.439729+0000 mgr.y (mgr.24310) 11 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: cephadm 2026-03-10T11:26:19.571393+0000 mgr.y (mgr.24310) 12 : cephadm [INF] Adjusting osd_memory_target on vm07 to 113.9M 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: cephadm 2026-03-10T11:26:19.572494+0000 mgr.y (mgr.24310) 13 : cephadm [WRN] Unable to set osd_memory_target on vm07 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: cephadm 2026-03-10T11:26:19.572556+0000 mgr.y (mgr.24310) 14 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: cephadm 2026-03-10T11:26:19.641503+0000 mgr.y (mgr.24310) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: cephadm 2026-03-10T11:26:19.731584+0000 mgr.y (mgr.24310) 16 : cephadm [INF] Deploying daemon node-exporter.a on vm05 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: cluster 2026-03-10T11:26:19.769469+0000 mgr.y (mgr.24310) 17 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: audit 2026-03-10T11:26:20.329005+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: cephadm 2026-03-10T11:26:20.331342+0000 mgr.y (mgr.24310) 18 : cephadm [INF] Deploying daemon node-exporter.b on vm07 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: audit 2026-03-10T11:26:20.924662+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:21.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:21 vm07 bash[17804]: audit 2026-03-10T11:26:21.292008+0000 mon.c (mon.1) 18 : audit [DBG] from='client.? 
192.168.123.105:0/3478093217' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T11:26:22.096 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:21 vm05 bash[37233]: v1.3.1: Pulling from prometheus/node-exporter 2026-03-10T11:26:22.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:22 vm05 bash[22470]: cephadm 2026-03-10T11:26:20.939560+0000 mgr.y (mgr.24310) 19 : cephadm [INF] Deploying daemon prometheus.a on vm07 2026-03-10T11:26:22.348 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: aa2a8d90b84c: Pulling fs layer 2026-03-10T11:26:22.348 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: b45d31ee2d7f: Pulling fs layer 2026-03-10T11:26:22.348 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: b5db1e299295: Pulling fs layer 2026-03-10T11:26:22.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[17453]: cephadm 2026-03-10T11:26:20.939560+0000 mgr.y (mgr.24310) 19 : cephadm [INF] Deploying daemon prometheus.a on vm07 2026-03-10T11:26:22.651 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:22 vm07 bash[17804]: cephadm 2026-03-10T11:26:20.939560+0000 mgr.y (mgr.24310) 19 : cephadm [INF] Deploying daemon prometheus.a on vm07 2026-03-10T11:26:22.652 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:22 vm07 bash[32761]: v1.3.1: Pulling from prometheus/node-exporter 2026-03-10T11:26:22.948 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:22 vm07 bash[32761]: aa2a8d90b84c: Pulling fs layer 2026-03-10T11:26:22.948 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:22 vm07 bash[32761]: b45d31ee2d7f: Pulling fs layer 2026-03-10T11:26:22.948 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:22 vm07 bash[32761]: b5db1e299295: Pulling fs layer 2026-03-10T11:26:22.964 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: b45d31ee2d7f: Download complete 2026-03-10T11:26:22.964 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: aa2a8d90b84c: Verifying Checksum 2026-03-10T11:26:22.964 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: aa2a8d90b84c: Download complete 2026-03-10T11:26:22.964 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: aa2a8d90b84c: Pull complete 2026-03-10T11:26:22.964 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: b5db1e299295: Verifying Checksum 2026-03-10T11:26:22.964 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: b5db1e299295: Download complete 2026-03-10T11:26:22.964 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: b45d31ee2d7f: Pull complete 2026-03-10T11:26:23.238 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: b5db1e299295: Pull complete 2026-03-10T11:26:23.239 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd 2026-03-10T11:26:23.239 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:22 vm05 bash[37233]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.3.1 2026-03-10T11:26:23.413 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[17804]: cluster 2026-03-10T11:26:21.769754+0000 mgr.y (mgr.24310) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 
active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:23.413 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: b45d31ee2d7f: Verifying Checksum 2026-03-10T11:26:23.413 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: b45d31ee2d7f: Download complete 2026-03-10T11:26:23.413 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: aa2a8d90b84c: Verifying Checksum 2026-03-10T11:26:23.414 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: aa2a8d90b84c: Download complete 2026-03-10T11:26:23.414 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: aa2a8d90b84c: Pull complete 2026-03-10T11:26:23.414 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: b5db1e299295: Verifying Checksum 2026-03-10T11:26:23.414 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: b5db1e299295: Download complete 2026-03-10T11:26:23.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:23 vm05 bash[22470]: cluster 2026-03-10T11:26:21.769754+0000 mgr.y (mgr.24310) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:23.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[17453]: cluster 2026-03-10T11:26:21.769754+0000 mgr.y (mgr.24310) 20 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:23.598 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.235Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)" 2026-03-10T11:26:23.598 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.235Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)" 2026-03-10T11:26:23.598 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.237Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/) 2026-03-10T11:26:23.598 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.237Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-10T11:26:23.598 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.237Z caller=node_exporter.go:108 level=info msg="Enabled collectors" 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.237Z caller=node_exporter.go:115 level=info collector=arp 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.237Z caller=node_exporter.go:115 level=info collector=bcache 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 
11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.237Z caller=node_exporter.go:115 level=info collector=bonding 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.237Z caller=node_exporter.go:115 level=info collector=btrfs 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.237Z caller=node_exporter.go:115 level=info collector=conntrack 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=cpu 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=cpufreq 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=diskstats 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=dmi 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=edac 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=entropy 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=fibrechannel 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=filefd 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=filesystem 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=hwmon 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=infiniband 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=ipvs 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=loadavg 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=mdadm 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=meminfo 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info 
collector=netclass 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=netdev 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=netstat 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=nfs 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=nfsd 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=nvme 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=os 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=powersupplyclass 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=pressure 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=rapl 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=schedstat 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=sockstat 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=softnet 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=stat 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=tapestats 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=textfile 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.238Z caller=node_exporter.go:115 level=info collector=thermal_zone 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.239Z caller=node_exporter.go:115 level=info collector=time 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.239Z caller=node_exporter.go:115 level=info collector=udp_queues 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 
11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.239Z caller=node_exporter.go:115 level=info collector=uname 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.239Z caller=node_exporter.go:115 level=info collector=vmstat 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.239Z caller=node_exporter.go:115 level=info collector=xfs 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.239Z caller=node_exporter.go:115 level=info collector=zfs 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.239Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100 2026-03-10T11:26:23.599 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:23 vm05 bash[37233]: ts=2026-03-10T11:26:23.239Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false 2026-03-10T11:26:23.698 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: b45d31ee2d7f: Pull complete 2026-03-10T11:26:23.698 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: b5db1e299295: Pull complete 2026-03-10T11:26:23.698 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: Digest: sha256:f2269e73124dd0f60a7d19a2ce1264d33d08a985aed0ee6b0b89d0be470592cd 2026-03-10T11:26:23.698 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.3.1 2026-03-10T11:26:23.698 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.641Z caller=node_exporter.go:182 level=info msg="Starting node_exporter" version="(version=1.3.1, branch=HEAD, revision=a2321e7b940ddcff26873612bccdf7cd4c42b6b6)" 2026-03-10T11:26:23.698 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.641Z caller=node_exporter.go:183 level=info msg="Build context" build_context="(go=go1.17.3, user=root@243aafa5525c, date=20211205-11:09:49)" 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+)($|/) 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:108 level=info msg="Enabled collectors" 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=arp 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 
bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=bcache 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=bonding 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=btrfs 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=conntrack 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=cpu 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=cpufreq 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=diskstats 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=dmi 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=edac 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=entropy 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=fibrechannel 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=filefd 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=filesystem 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=hwmon 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=infiniband 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=ipvs 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=loadavg 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=mdadm 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.642Z caller=node_exporter.go:115 level=info collector=meminfo 
2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=netclass 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=netdev 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=netstat 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=nfs 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=nfsd 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=nvme 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=os 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=powersupplyclass 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=pressure 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=rapl 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=schedstat 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=sockstat 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=softnet 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=stat 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=tapestats 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=textfile 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=thermal_zone 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=time 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 
bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=udp_queues 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=uname 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=vmstat 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=xfs 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:115 level=info collector=zfs 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=node_exporter.go:199 level=info msg="Listening on" address=:9100 2026-03-10T11:26:23.699 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:23 vm07 bash[32761]: ts=2026-03-10T11:26:23.643Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false 2026-03-10T11:26:24.989 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:25.359 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:26:25.359 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":50,"fsid":"72041074-1c73-11f1-8607-4fca9a5e0a4d","created":"2026-03-10T11:23:06.356940+0000","modified":"2026-03-10T11:26:14.773975+0000","last_up_change":"2026-03-10T11:26:02.324227+0000","last_in_change":"2026-03-10T11:25:49.543158+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T11:24:45.406653+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promo
te":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"0992e6dc-d298-462b-bccd-b74959342712","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":47,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6803","nonce":2004210335}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6805","nonce":2004210335}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6809","nonce":2004210335}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6807","nonce":2004210335}]},"public_addr":"192.168.123.105:6803/2004210335","cluster_addr":"192.168.123.105:6805/2004210335","heartbeat_back_addr":"192.168.123.105:6809/2004210335","heartbeat_front_addr":"192.168.123.105:6807/2004210335","state":["exists","up"]},{"osd":1,"uuid":"9cbc5424-3289-45dc-8763-da809c9c9e84","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":30,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6811","nonce":1089345282}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6813","nonce":1089345282}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6817","nonce":1089345282}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6815","nonce":1089345282}]},"public_addr":"192.168.123.105:6811/1089345282","cluster_addr":"192.168.123.105:6813/1089345282","heartbeat_back_addr":"192.168.123.105:6817/1089345282","heartbeat_front_addr":"192.168.123.105:6815/1089345282","state":["exists","up"]},{"osd":2,"uuid":"58079681-6944-4372-ab7d-0aa5717818bf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6819","nonce":420660061}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6821","nonce":420660061}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6825","nonce":420660061}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6823","nonce":420660061}]},"public_addr":"192.168.123.105:6819/420660061","cluster_addr":"192.168.123.105:6821/420660061","heartbeat_back_addr":"192.168.123.105:6825/420660061","heartbeat_front_addr":"192.168.123.105:6823/420660061","state":["exists","up"]},{"osd":3,"uuid":"0e62b553-78b1-4fbe-870e-d68c1967e6be","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin
":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6827","nonce":311748923}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6829","nonce":311748923}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6832","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6833","nonce":311748923}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6830","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6831","nonce":311748923}]},"public_addr":"192.168.123.105:6827/311748923","cluster_addr":"192.168.123.105:6829/311748923","heartbeat_back_addr":"192.168.123.105:6833/311748923","heartbeat_front_addr":"192.168.123.105:6831/311748923","state":["exists","up"]},{"osd":4,"uuid":"5d2d7aab-4d36-465e-b574-aaa4de107693","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":29,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6801","nonce":774944665}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6803","nonce":774944665}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6807","nonce":774944665}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6805","nonce":774944665}]},"public_addr":"192.168.123.107:6801/774944665","cluster_addr":"192.168.123.107:6803/774944665","heartbeat_back_addr":"192.168.123.107:6807/774944665","heartbeat_front_addr":"192.168.123.107:6805/774944665","state":["exists","up"]},{"osd":5,"uuid":"dcefdca8-8af9-4aeb-9472-1fb1d076fa1e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":35,"up_thru":36,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6809","nonce":1013528300}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6811","nonce":1013528300}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6815","nonce":1013528300}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6813","nonce":1013528300}]},"public_addr":"192.168.123.107:6809/1013528300","cluster_addr":"192.168.123.107:6811/1013528300","heartbeat_back_addr":"192.168.123.107:6815/1013528300","heartbeat_front_addr":"192.168.123.107:6813/1013528300","state":["exists","up"]},{"osd":6,"uuid":"783416c9-d1a2-4d8f-91e5-b6343f3a3d0a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6817","nonce":319224116}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6819","nonce":319224116}]},"heartbea
t_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6823","nonce":319224116}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6820","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6821","nonce":319224116}]},"public_addr":"192.168.123.107:6817/319224116","cluster_addr":"192.168.123.107:6819/319224116","heartbeat_back_addr":"192.168.123.107:6823/319224116","heartbeat_front_addr":"192.168.123.107:6821/319224116","state":["exists","up"]},{"osd":7,"uuid":"d3a17b00-d9f4-4951-b587-40f724c9827b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6825","nonce":3044827210}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6827","nonce":3044827210}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6830","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6831","nonce":3044827210}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6829","nonce":3044827210}]},"public_addr":"192.168.123.107:6825/3044827210","cluster_addr":"192.168.123.107:6827/3044827210","heartbeat_back_addr":"192.168.123.107:6831/3044827210","heartbeat_front_addr":"192.168.123.107:6829/3044827210","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:12.150542+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:26.315392+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:42.990393+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:59.190256+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:25:13.846670+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:25:29.395250+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:25:44.979506+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:26:01.088439+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/3163341454":"2026-03-11T11:26:14.773959+0000","192.168.123.105:0/467921525":"2026-03-11T11:26:14.773959+0000","192.168.123.105:0/2288453217":"2026-03-11T11:26:14.773959+0000","192.168.123.105:6801/2589338318":"2026-03-11T11:26:14.773959+0000","192.168.123.105:0/1118
298400":"2026-03-11T11:26:14.773959+0000","192.168.123.105:0/1312851658":"2026-03-11T11:23:31.132317+0000","192.168.123.105:6801/1110057132":"2026-03-11T11:23:31.132317+0000","192.168.123.105:6800/2589338318":"2026-03-11T11:26:14.773959+0000","192.168.123.105:6800/1110057132":"2026-03-11T11:23:31.132317+0000","192.168.123.105:0/3902952517":"2026-03-11T11:23:20.465744+0000","192.168.123.105:0/3473116901":"2026-03-11T11:23:31.132317+0000","192.168.123.105:6801/1953728704":"2026-03-11T11:23:20.465744+0000","192.168.123.105:0/4010853674":"2026-03-11T11:23:31.132317+0000","192.168.123.105:0/2723537270":"2026-03-11T11:23:20.465744+0000","192.168.123.105:0/3538663775":"2026-03-11T11:23:20.465744+0000","192.168.123.105:6800/1953728704":"2026-03-11T11:23:20.465744+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T11:26:25.415 INFO:tasks.cephadm.ceph_manager.ceph:all up! 2026-03-10T11:26:25.415 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd dump --format=json 2026-03-10T11:26:25.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:25 vm07 bash[17804]: cluster 2026-03-10T11:26:23.770021+0000 mgr.y (mgr.24310) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:25.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:25 vm05 bash[17453]: cluster 2026-03-10T11:26:23.770021+0000 mgr.y (mgr.24310) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:25.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:25 vm05 bash[22470]: cluster 2026-03-10T11:26:23.770021+0000 mgr.y (mgr.24310) 21 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:26 vm07 bash[17804]: audit 2026-03-10T11:26:25.356077+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.105:0/3849802479' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:26:26.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:26 vm07 bash[17804]: audit 2026-03-10T11:26:25.874701+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:26.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:26 vm05 bash[22470]: audit 2026-03-10T11:26:25.356077+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 192.168.123.105:0/3849802479' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:26:26.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:26 vm05 bash[22470]: audit 2026-03-10T11:26:25.874701+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:26.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:26 vm05 bash[17453]: audit 2026-03-10T11:26:25.356077+0000 mon.c (mon.1) 19 : audit [DBG] from='client.? 
192.168.123.105:0/3849802479' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:26:26.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:26 vm05 bash[17453]: audit 2026-03-10T11:26:25.874701+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:27.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:27 vm07 bash[17804]: cluster 2026-03-10T11:26:25.770268+0000 mgr.y (mgr.24310) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:27.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:27 vm05 bash[22470]: cluster 2026-03-10T11:26:25.770268+0000 mgr.y (mgr.24310) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:27.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:27 vm05 bash[17453]: cluster 2026-03-10T11:26:25.770268+0000 mgr.y (mgr.24310) 22 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:28.034 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:28.380 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:26:28.380 INFO:teuthology.orchestra.run.vm05.stdout:{"epoch":50,"fsid":"72041074-1c73-11f1-8607-4fca9a5e0a4d","created":"2026-03-10T11:23:06.356940+0000","modified":"2026-03-10T11:26:14.773975+0000","last_up_change":"2026-03-10T11:26:02.324227+0000","last_in_change":"2026-03-10T11:25:49.543158+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"quincy","pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-10T11:24:45.406653+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_n
um_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}}}],"osds":[{"osd":0,"uuid":"0992e6dc-d298-462b-bccd-b74959342712","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":47,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6803","nonce":2004210335}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6805","nonce":2004210335}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6809","nonce":2004210335}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":2004210335},{"type":"v1","addr":"192.168.123.105:6807","nonce":2004210335}]},"public_addr":"192.168.123.105:6803/2004210335","cluster_addr":"192.168.123.105:6805/2004210335","heartbeat_back_addr":"192.168.123.105:6809/2004210335","heartbeat_front_addr":"192.168.123.105:6807/2004210335","state":["exists","up"]},{"osd":1,"uuid":"9cbc5424-3289-45dc-8763-da809c9c9e84","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":30,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6811","nonce":1089345282}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6813","nonce":1089345282}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6816","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6817","nonce":1089345282}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":1089345282},{"type":"v1","addr":"192.168.123.105:6815","nonce":1089345282}]},"public_addr":"192.168.123.105:6811/1089345282","cluster_addr":"192.168.123.105:6813/1089345282","heartbeat_back_addr":"192.168.123.105:6817/1089345282","heartbeat_front_addr":"192.168.123.105:6815/1089345282","state":["exists","up"]},{"osd":2,"uuid":"58079681-6944-4372-ab7d-0aa5717818bf","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6818","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6819","nonce":420660061}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6820","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6821","nonce":420660061}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6824","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6825","nonce":420660061}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6822","nonce":420660061},{"type":"v1","addr":"192.168.123.105:6823","nonce":420660061}]},"public_addr":"192.168.123.105:6819/420660061","cluster_addr":"192.168.123.105:6821/420660061","heartbeat_back_addr":"192.168.123.105:6825/420660061","heartbeat_front_addr":"192.168.123.105:6823/420660061","state":["exists","up"]},{"osd":3,"uuid":"0e62b553-78b1-4fbe-870e-d68c1967e6be","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":24,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6826","nonce":311748923},{"type":"v1","addr":"192.16
8.123.105:6827","nonce":311748923}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6828","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6829","nonce":311748923}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6832","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6833","nonce":311748923}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6830","nonce":311748923},{"type":"v1","addr":"192.168.123.105:6831","nonce":311748923}]},"public_addr":"192.168.123.105:6827/311748923","cluster_addr":"192.168.123.105:6829/311748923","heartbeat_back_addr":"192.168.123.105:6833/311748923","heartbeat_front_addr":"192.168.123.105:6831/311748923","state":["exists","up"]},{"osd":4,"uuid":"5d2d7aab-4d36-465e-b574-aaa4de107693","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":29,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6800","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6801","nonce":774944665}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6802","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6803","nonce":774944665}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6806","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6807","nonce":774944665}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6804","nonce":774944665},{"type":"v1","addr":"192.168.123.107:6805","nonce":774944665}]},"public_addr":"192.168.123.107:6801/774944665","cluster_addr":"192.168.123.107:6803/774944665","heartbeat_back_addr":"192.168.123.107:6807/774944665","heartbeat_front_addr":"192.168.123.107:6805/774944665","state":["exists","up"]},{"osd":5,"uuid":"dcefdca8-8af9-4aeb-9472-1fb1d076fa1e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":35,"up_thru":36,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6808","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6809","nonce":1013528300}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6810","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6811","nonce":1013528300}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6814","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6815","nonce":1013528300}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6812","nonce":1013528300},{"type":"v1","addr":"192.168.123.107:6813","nonce":1013528300}]},"public_addr":"192.168.123.107:6809/1013528300","cluster_addr":"192.168.123.107:6811/1013528300","heartbeat_back_addr":"192.168.123.107:6815/1013528300","heartbeat_front_addr":"192.168.123.107:6813/1013528300","state":["exists","up"]},{"osd":6,"uuid":"783416c9-d1a2-4d8f-91e5-b6343f3a3d0a","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":41,"up_thru":42,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6816","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6817","nonce":319224116}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6818","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6819","nonce":319224116}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6822","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6823","nonce":319224116}]},"heartbeat_front_addrs":{"addrvec":[
{"type":"v2","addr":"192.168.123.107:6820","nonce":319224116},{"type":"v1","addr":"192.168.123.107:6821","nonce":319224116}]},"public_addr":"192.168.123.107:6817/319224116","cluster_addr":"192.168.123.107:6819/319224116","heartbeat_back_addr":"192.168.123.107:6823/319224116","heartbeat_front_addr":"192.168.123.107:6821/319224116","state":["exists","up"]},{"osd":7,"uuid":"d3a17b00-d9f4-4951-b587-40f724c9827b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":47,"up_thru":48,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6824","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6825","nonce":3044827210}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6826","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6827","nonce":3044827210}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6830","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6831","nonce":3044827210}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.107:6828","nonce":3044827210},{"type":"v1","addr":"192.168.123.107:6829","nonce":3044827210}]},"public_addr":"192.168.123.107:6825/3044827210","cluster_addr":"192.168.123.107:6827/3044827210","heartbeat_back_addr":"192.168.123.107:6831/3044827210","heartbeat_front_addr":"192.168.123.107:6829/3044827210","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:12.150542+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:26.315392+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:42.990393+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:24:59.190256+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:25:13.846670+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:25:29.395250+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:25:44.979506+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540138303579357183,"old_weight":0,"last_purged_snaps_scrub":"2026-03-10T11:26:01.088439+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.105:0/3163341454":"2026-03-11T11:26:14.773959+0000","192.168.123.105:0/467921525":"2026-03-11T11:26:14.773959+0000","192.168.123.105:0/2288453217":"2026-03-11T11:26:14.773959+0000","192.168.123.105:6801/2589338318":"2026-03-11T11:26:14.773959+0000","192.168.123.105:0/1118298400":"2026-03-11T11:26:14.773959+0000","192.168.123.105:0/1312851658":"2026-03-11T11:23:31.132317+0000","192.168.123.105:6801/1110057132":"2026-03-11T11:23:31.132317+0000","192.168.123.
105:6800/2589338318":"2026-03-11T11:26:14.773959+0000","192.168.123.105:6800/1110057132":"2026-03-11T11:23:31.132317+0000","192.168.123.105:0/3902952517":"2026-03-11T11:23:20.465744+0000","192.168.123.105:0/3473116901":"2026-03-11T11:23:31.132317+0000","192.168.123.105:6801/1953728704":"2026-03-11T11:23:20.465744+0000","192.168.123.105:0/4010853674":"2026-03-11T11:23:31.132317+0000","192.168.123.105:0/2723537270":"2026-03-11T11:23:20.465744+0000","192.168.123.105:0/3538663775":"2026-03-11T11:23:20.465744+0000","192.168.123.105:6800/1953728704":"2026-03-11T11:23:20.465744+0000"},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-10T11:26:28.432 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph tell osd.0 flush_pg_stats 2026-03-10T11:26:28.432 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph tell osd.1 flush_pg_stats 2026-03-10T11:26:28.432 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph tell osd.2 flush_pg_stats 2026-03-10T11:26:28.433 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph tell osd.3 flush_pg_stats 2026-03-10T11:26:28.433 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph tell osd.4 flush_pg_stats 2026-03-10T11:26:28.433 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph tell osd.5 flush_pg_stats 2026-03-10T11:26:28.433 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph tell osd.6 flush_pg_stats 2026-03-10T11:26:28.433 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph tell osd.7 flush_pg_stats 2026-03-10T11:26:29.590 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:29 vm07 bash[17804]: cluster 2026-03-10T11:26:27.770568+0000 mgr.y (mgr.24310) 23 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:29.590 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:29 vm07 bash[17804]: audit 2026-03-10T11:26:28.376692+0000 mon.a (mon.0) 591 : audit [DBG] from='client.? 
192.168.123.105:0/1996800494' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:26:29.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:29 vm05 bash[22470]: cluster 2026-03-10T11:26:27.770568+0000 mgr.y (mgr.24310) 23 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:29.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:29 vm05 bash[22470]: audit 2026-03-10T11:26:28.376692+0000 mon.a (mon.0) 591 : audit [DBG] from='client.? 192.168.123.105:0/1996800494' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:26:29.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:29 vm05 bash[17453]: cluster 2026-03-10T11:26:27.770568+0000 mgr.y (mgr.24310) 23 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:29.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:29 vm05 bash[17453]: audit 2026-03-10T11:26:28.376692+0000 mon.a (mon.0) 591 : audit [DBG] from='client.? 192.168.123.105:0/1996800494' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-10T11:26:29.855 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
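
The eight "ceph tell osd.N flush_pg_stats" commands above, paired with the "ceph osd last-stat-seq osd.N" queries that follow below, are the harness's barrier for making the mgr's pgmap catch up with every OSD: the tell returns the stat sequence number the OSD just published, and last-stat-seq is polled until the cluster has recorded at least that sequence. A minimal sketch of the pattern, assuming a cephadm-shell wrapper like the one driving this run (the run() helper and the timeout value are illustrative, not teuthology's actual API):

    import subprocess
    import time

    FSID = "72041074-1c73-11f1-8607-4fca9a5e0a4d"
    IMAGE = "quay.io/ceph/ceph:v17.2.0"

    def run(*args):
        # Wrap a ceph command in `cephadm shell`, as the commands above do.
        cmd = ["sudo", "/home/ubuntu/cephtest/cephadm", "--image", IMAGE,
               "shell", "--fsid", FSID, "--"] + list(args)
        return subprocess.check_output(cmd, text=True).strip()

    def flush_pg_stats(osd_ids, timeout=120):
        # `tell osd.N flush_pg_stats` prints the stat sequence the OSD just
        # published; wait until the cluster-side last-stat-seq reaches it.
        seqs = {i: int(run("ceph", "tell", f"osd.{i}", "flush_pg_stats"))
                for i in osd_ids}
        deadline = time.time() + timeout
        for i, want in seqs.items():
            while int(run("ceph", "osd", "last-stat-seq", f"osd.{i}")) < want:
                if time.time() > deadline:
                    raise TimeoutError(f"osd.{i} stats not flushed")
                time.sleep(1)

    flush_pg_stats(range(8))

In this run, the bare integers printed below (77309411351 and so on) are exactly those last-stat-seq replies, one per OSD.
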
2026-03-10T11:26:29.855 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.855 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.856 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.856 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.856 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.856 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.856 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:29.856 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 systemd[1]: Started Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 
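
The repeated KillMode=none warnings above are emitted by systemd for the ceph-<fsid>@.service template that this cephadm version installs under /etc/systemd/system; systemd re-prints the warning for every instance of the unit on each daemon-reload, which is why it floods the journalctl tail of every daemon on the host whenever a new service (here prometheus.a) is deployed. In this run it is cosmetic noise, not a failure. For reference, a drop-in of the following shape would switch to one of the modes the warning itself suggests (the drop-in path is illustrative, and cephadm regenerates its unit files, so a durable fix belongs in the generated template rather than a local override):

    # /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d/killmode.conf
    [Service]
    # Replace the deprecated KillMode=none from the generated template.
    KillMode=mixed
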
2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.990Z caller=main.go:475 level=info msg="No time or size retention was set so using the default time retention" duration=15d 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.990Z caller=main.go:512 level=info msg="Starting Prometheus" version="(version=2.33.4, branch=HEAD, revision=83032011a5d3e6102624fe58241a374a7201fee8)" 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.990Z caller=main.go:517 level=info build_context="(go=go1.17.7, user=root@d13bf69e7be8, date=20220222-16:51:28)" 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.990Z caller=main.go:518 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm07 (none))" 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.990Z caller=main.go:519 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.990Z caller=main.go:520 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.992Z caller=web.go:570 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.992Z caller=main.go:923 level=info msg="Starting TSDB ..." 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.994Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.994Z caller=head.go:527 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.573µs 2026-03-10T11:26:30.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.994Z caller=head.go:533 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T11:26:30.199 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.996Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." 
http2=false 2026-03-10T11:26:30.199 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.997Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 2026-03-10T11:26:30.199 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.997Z caller=head.go:610 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=29.145µs wal_replay_duration=2.411071ms total_replay_duration=2.497242ms 2026-03-10T11:26:30.199 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.997Z caller=main.go:944 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T11:26:30.199 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.997Z caller=main.go:947 level=info msg="TSDB started" 2026-03-10T11:26:30.199 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:29 vm07 bash[33148]: ts=2026-03-10T11:26:29.997Z caller=main.go:1128 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T11:26:30.199 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:30 vm07 bash[33148]: ts=2026-03-10T11:26:30.011Z caller=main.go:1165 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=13.298959ms db_storage=581ns remote_storage=1.383µs web_handler=271ns query_engine=650ns scrape=1.184445ms scrape_sd=30.086µs notify=681ns notify_sd=1.793µs rules=11.770069ms 2026-03-10T11:26:30.199 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:30 vm07 bash[33148]: ts=2026-03-10T11:26:30.011Z caller=main.go:896 level=info msg="Server is ready to receive web requests." 2026-03-10T11:26:30.348 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
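
The prometheus.a startup above shows a clean first boot: an empty WAL (segment 0), the default 15d retention, and the web handler listening on :9095, the port cephadm assigns to this Prometheus deployment. Once "Server is ready to receive web requests" is logged, readiness can also be confirmed over HTTP; a small probe, assuming the node is reachable from the test runner (host and port taken from the log, the helper itself is illustrative):

    import urllib.request

    def prometheus_ready(host="vm07", port=9095):
        # Prometheus answers /-/ready with HTTP 200 once the TSDB is started
        # and the configuration file has been loaded.
        try:
            with urllib.request.urlopen(
                    f"http://{host}:{port}/-/ready", timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    print(prometheus_ready())
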
2026-03-10T11:26:31.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:30 vm07 bash[17804]: cluster 2026-03-10T11:26:29.770879+0000 mgr.y (mgr.24310) 24 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:31.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:30 vm07 bash[17804]: audit 2026-03-10T11:26:29.883969+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:31.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:30 vm07 bash[17804]: cephadm 2026-03-10T11:26:29.889429+0000 mgr.y (mgr.24310) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm05 2026-03-10T11:26:31.331 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:31.331 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:31.333 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:31.335 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:31.342 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:31.343 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:31.343 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:31.344 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:31.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:30 vm05 bash[17453]: cluster 2026-03-10T11:26:29.770879+0000 mgr.y (mgr.24310) 24 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:31.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:30 vm05 bash[17453]: audit 2026-03-10T11:26:29.883969+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:31.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:30 vm05 bash[17453]: cephadm 2026-03-10T11:26:29.889429+0000 mgr.y (mgr.24310) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm05 2026-03-10T11:26:31.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:30 vm05 bash[22470]: cluster 2026-03-10T11:26:29.770879+0000 mgr.y (mgr.24310) 24 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:31.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:30 vm05 bash[22470]: audit 2026-03-10T11:26:29.883969+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:31.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:30 vm05 bash[22470]: cephadm 2026-03-10T11:26:29.889429+0000 mgr.y (mgr.24310) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm05 2026-03-10T11:26:32.160 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:31 vm05 bash[22470]: audit 2026-03-10T11:26:30.883525+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:32.160 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:31 vm05 bash[17453]: audit 2026-03-10T11:26:30.883525+0000 mon.a (mon.0) 593 : 
audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:32.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:31 vm07 bash[17804]: audit 2026-03-10T11:26:30.883525+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:32.255 INFO:teuthology.orchestra.run.vm05.stdout:77309411351 2026-03-10T11:26:32.255 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd last-stat-seq osd.2 2026-03-10T11:26:32.349 INFO:teuthology.orchestra.run.vm05.stdout:34359738398 2026-03-10T11:26:32.350 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd last-stat-seq osd.0 2026-03-10T11:26:32.746 INFO:teuthology.orchestra.run.vm05.stdout:103079215123 2026-03-10T11:26:32.746 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd last-stat-seq osd.3 2026-03-10T11:26:32.755 INFO:teuthology.orchestra.run.vm05.stdout:55834574874 2026-03-10T11:26:32.755 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd last-stat-seq osd.1 2026-03-10T11:26:32.769 INFO:teuthology.orchestra.run.vm05.stdout:124554051600 2026-03-10T11:26:32.769 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd last-stat-seq osd.4 2026-03-10T11:26:32.865 INFO:teuthology.orchestra.run.vm05.stdout:176093659146 2026-03-10T11:26:32.865 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd last-stat-seq osd.6 2026-03-10T11:26:32.890 INFO:teuthology.orchestra.run.vm05.stdout:150323855373 2026-03-10T11:26:32.891 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd last-stat-seq osd.5 2026-03-10T11:26:32.987 INFO:teuthology.orchestra.run.vm05.stdout:201863462919 2026-03-10T11:26:32.987 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph osd last-stat-seq osd.7 2026-03-10T11:26:33.609 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:33 vm05 bash[22470]: cluster 2026-03-10T11:26:31.771179+0000 mgr.y (mgr.24310) 26 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:33.609 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:33 vm05 bash[17453]: cluster 2026-03-10T11:26:31.771179+0000 mgr.y (mgr.24310) 26 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:33.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:33 vm07 bash[17804]: cluster 2026-03-10T11:26:31.771179+0000 mgr.y (mgr.24310) 26 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:33.909 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use 
KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:33.909 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:33.910 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:33.910 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:33.910 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:33.910 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:33.910 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:33.910 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
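
The same warning storm repeats here on vm05 as alertmanager.a is installed: every journalctl tail on the host observes the daemon-reload at once, and several daemons log the identical message back to back. When reading archives like this one, collapsing consecutive duplicates makes the actual events (unit starts, cephadm deploy messages) stand out. A small stdin filter along those lines, purely illustrative:

    import sys

    def collapse(lines):
        # Compare journal message bodies with the teuthology and journalctl
        # prefixes stripped, so one message mirrored into several daemon
        # tails, or repeated verbatim, is printed only once.
        prev = None
        for line in lines:
            body = line.split("INFO:journalctl", 1)[-1]
            body = body.split("]: ", 1)[-1]
            if body != prev:
                yield line
            prev = body

    for line in collapse(sys.stdin):
        sys.stdout.write(line)
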
2026-03-10T11:26:33.910 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.197 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.197 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.197 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.197 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.197 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.198 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.198 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.198 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.198 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:33 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.198 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.198 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: Started Ceph alertmanager.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:26:34.198 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:26:34.198 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:26:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
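
With prometheus.a running and alertmanager.a started just above, the monitoring stack from the initial deployment is nearly complete (grafana.a follows below). From the cluster side, the usual confirmation is ceph orch ps, whose JSON output can be scanned for daemons not yet running; a sketch reusing the cephadm-shell invocation from this run (the daemon_type, daemon_id, and status_desc field names are as this Ceph version serializes them):

    import json
    import subprocess

    def daemons_not_running():
        # `ceph orch ps --format json` lists every cephadm-managed daemon,
        # including a human-readable status_desc such as "running".
        out = subprocess.check_output(
            ["sudo", "/home/ubuntu/cephtest/cephadm",
             "--image", "quay.io/ceph/ceph:v17.2.0",
             "shell", "--fsid", "72041074-1c73-11f1-8607-4fca9a5e0a4d",
             "--", "ceph", "orch", "ps", "--format", "json"],
            text=True)
        return [f"{d['daemon_type']}.{d['daemon_id']}"
                for d in json.loads(out) if d.get("status_desc") != "running"]

    print(daemons_not_running())
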
2026-03-10T11:26:34.598 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 bash[39585]: level=info ts=2026-03-10T11:26:34.235Z caller=main.go:225 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=61046b17771a57cfd4c4a51be370ab930a4d7d54)"
2026-03-10T11:26:34.598 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 bash[39585]: level=info ts=2026-03-10T11:26:34.235Z caller=main.go:226 build_context="(go=go1.16.7, user=root@e21a959be8d2, date=20210825-10:48:55)"
2026-03-10T11:26:34.598 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 bash[39585]: level=info ts=2026-03-10T11:26:34.236Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" addr=192.168.123.105 port=9094
2026-03-10T11:26:34.598 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 bash[39585]: level=info ts=2026-03-10T11:26:34.237Z caller=cluster.go:671 component=cluster msg="Waiting for gossip to settle..." interval=2s
2026-03-10T11:26:34.598 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 bash[39585]: level=info ts=2026-03-10T11:26:34.255Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T11:26:34.598 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 bash[39585]: level=info ts=2026-03-10T11:26:34.255Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T11:26:34.598 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 bash[39585]: level=info ts=2026-03-10T11:26:34.257Z caller=main.go:518 msg=Listening address=:9093
2026-03-10T11:26:34.598 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:34 vm05 bash[39585]: level=info ts=2026-03-10T11:26:34.257Z caller=tls_config.go:191 msg="TLS is disabled." http2=false
2026-03-10T11:26:34.698 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:34 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:35.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:35 vm07 bash[17804]: cluster 2026-03-10T11:26:33.771539+0000 mgr.y (mgr.24310) 27 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:35.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:35 vm07 bash[17804]: audit 2026-03-10T11:26:34.114025+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:35 vm07 bash[17804]: audit 2026-03-10T11:26:34.166667+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:35 vm07 bash[17804]: audit 2026-03-10T11:26:34.177325+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:35 vm07 bash[17804]: audit 2026-03-10T11:26:34.179973+0000 mgr.y (mgr.24310) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:26:35.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:35 vm07 bash[17804]: audit 2026-03-10T11:26:34.181098+0000 mon.b (mon.2) 56 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:26:35.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:35 vm07 bash[17804]: audit 2026-03-10T11:26:34.185560+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:35 vm07 bash[17804]: cephadm 2026-03-10T11:26:34.196702+0000 mgr.y (mgr.24310) 29 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T11:26:35.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:35 vm05 bash[22470]: cluster 2026-03-10T11:26:33.771539+0000 mgr.y (mgr.24310) 27 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:35 vm05 bash[22470]: audit 2026-03-10T11:26:34.114025+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:35 vm05 bash[22470]: audit 2026-03-10T11:26:34.166667+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:35 vm05 bash[22470]: audit 2026-03-10T11:26:34.177325+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:35 vm05 bash[22470]: audit 2026-03-10T11:26:34.179973+0000 mgr.y (mgr.24310) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:35 vm05 bash[22470]: audit 2026-03-10T11:26:34.181098+0000 mon.b (mon.2) 56 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:35 vm05 bash[22470]: audit 2026-03-10T11:26:34.185560+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:35 vm05 bash[22470]: cephadm 2026-03-10T11:26:34.196702+0000 mgr.y (mgr.24310) 29 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:35 vm05 bash[17453]: cluster 2026-03-10T11:26:33.771539+0000 mgr.y (mgr.24310) 27 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:35 vm05 bash[17453]: audit 2026-03-10T11:26:34.114025+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:35 vm05 bash[17453]: audit 2026-03-10T11:26:34.166667+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:35 vm05 bash[17453]: audit 2026-03-10T11:26:34.177325+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:35 vm05 bash[17453]: audit 2026-03-10T11:26:34.179973+0000 mgr.y (mgr.24310) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:35 vm05 bash[17453]: audit 2026-03-10T11:26:34.181098+0000 mon.b (mon.2) 56 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:35 vm05 bash[17453]: audit 2026-03-10T11:26:34.185560+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:35.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:35 vm05 bash[17453]: cephadm 2026-03-10T11:26:34.196702+0000 mgr.y (mgr.24310) 29 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T11:26:36.033 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config
2026-03-10T11:26:36.033 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config
2026-03-10T11:26:36.033 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config
2026-03-10T11:26:36.035 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config
2026-03-10T11:26:36.035 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config
2026-03-10T11:26:36.039 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config
2026-03-10T11:26:36.041 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config
2026-03-10T11:26:36.042 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config
2026-03-10T11:26:36.415 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:36 vm05 bash[39585]: level=info ts=2026-03-10T11:26:36.238Z caller=cluster.go:696 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.001687236s
2026-03-10T11:26:37.070 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:36 vm05 bash[17453]: cluster 2026-03-10T11:26:35.771833+0000 mgr.y (mgr.24310) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:37.070 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:36 vm05 bash[17453]: audit 2026-03-10T11:26:35.899482+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:37.070 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:36 vm05 bash[22470]: cluster 2026-03-10T11:26:35.771833+0000 mgr.y (mgr.24310) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:37.070 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:36 vm05 bash[22470]: audit 2026-03-10T11:26:35.899482+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:37.102 INFO:teuthology.orchestra.run.vm05.stdout:124554051600
2026-03-10T11:26:37.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:36 vm07 bash[17804]: cluster 2026-03-10T11:26:35.771833+0000 mgr.y (mgr.24310) 30 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:37.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:36 vm07 bash[17804]: audit 2026-03-10T11:26:35.899482+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:37.230 INFO:teuthology.orchestra.run.vm05.stdout:201863462919
2026-03-10T11:26:37.307 INFO:tasks.cephadm.ceph_manager.ceph:need seq 124554051600 got 124554051600 for osd.4
2026-03-10T11:26:37.307 DEBUG:teuthology.parallel:result is None
2026-03-10T11:26:37.415 INFO:tasks.cephadm.ceph_manager.ceph:need seq 201863462919 got 201863462919 for osd.7
2026-03-10T11:26:37.415 DEBUG:teuthology.parallel:result is None
2026-03-10T11:26:37.418 INFO:teuthology.orchestra.run.vm05.stdout:34359738398
2026-03-10T11:26:37.494 INFO:teuthology.orchestra.run.vm05.stdout:103079215123
2026-03-10T11:26:37.538 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738398 got 34359738398 for osd.0
2026-03-10T11:26:37.538 DEBUG:teuthology.parallel:result is None
2026-03-10T11:26:37.642 INFO:tasks.cephadm.ceph_manager.ceph:need seq 103079215123 got 103079215123 for osd.3
2026-03-10T11:26:37.642 DEBUG:teuthology.parallel:result is None
2026-03-10T11:26:37.650 INFO:teuthology.orchestra.run.vm05.stdout:150323855373
2026-03-10T11:26:37.659 INFO:teuthology.orchestra.run.vm05.stdout:55834574874
2026-03-10T11:26:37.661 INFO:teuthology.orchestra.run.vm05.stdout:176093659146
2026-03-10T11:26:37.662 INFO:teuthology.orchestra.run.vm05.stdout:77309411351
2026-03-10T11:26:37.759 INFO:tasks.cephadm.ceph_manager.ceph:need seq 150323855373 got 150323855373 for osd.5
2026-03-10T11:26:37.760 DEBUG:teuthology.parallel:result is None
2026-03-10T11:26:37.788 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574874 got 55834574874 for osd.1
2026-03-10T11:26:37.789 DEBUG:teuthology.parallel:result is None
2026-03-10T11:26:37.820 INFO:tasks.cephadm.ceph_manager.ceph:need seq 176093659146 got 176093659146 for osd.6
2026-03-10T11:26:37.821 DEBUG:teuthology.parallel:result is None
2026-03-10T11:26:37.825 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411351 got 77309411351 for osd.2
2026-03-10T11:26:37.825 DEBUG:teuthology.parallel:result is None
2026-03-10T11:26:37.825 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean
2026-03-10T11:26:37.825 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph pg dump --format=json
2026-03-10T11:26:37.907 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:37 vm05 bash[17453]: audit 2026-03-10T11:26:37.097260+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.105:0/1313375835' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-10T11:26:37.907 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:37 vm05 bash[17453]: audit 2026-03-10T11:26:37.215502+0000 mon.a (mon.0) 599 : audit [DBG] from='client.? 192.168.123.105:0/3394741972' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-10T11:26:37.907 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:37 vm05 bash[17453]: audit 2026-03-10T11:26:37.415065+0000 mon.a (mon.0) 600 : audit [DBG] from='client.? 192.168.123.105:0/1926667804' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-10T11:26:37.907 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:37 vm05 bash[17453]: audit 2026-03-10T11:26:37.488482+0000 mon.a (mon.0) 601 : audit [DBG] from='client.? 192.168.123.105:0/2703317310' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-10T11:26:37.907 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:37 vm05 bash[17453]: audit 2026-03-10T11:26:37.643877+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.105:0/1936115584' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T11:26:37.907 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:37 vm05 bash[17453]: audit 2026-03-10T11:26:37.652660+0000 mon.a (mon.0) 602 : audit [DBG] from='client.? 192.168.123.105:0/3100097666' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T11:26:37.907 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:37 vm05 bash[17453]: audit 2026-03-10T11:26:37.658649+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.105:0/3900577838' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-10T11:26:37.907 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:37 vm05 bash[17453]: audit 2026-03-10T11:26:37.659213+0000 mon.a (mon.0) 603 : audit [DBG] from='client.? 192.168.123.105:0/2803745116' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-10T11:26:38.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:37 vm07 bash[17804]: audit 2026-03-10T11:26:37.097260+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.105:0/1313375835' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-10T11:26:38.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:37 vm07 bash[17804]: audit 2026-03-10T11:26:37.215502+0000 mon.a (mon.0) 599 : audit [DBG] from='client.? 192.168.123.105:0/3394741972' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-10T11:26:38.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:37 vm07 bash[17804]: audit 2026-03-10T11:26:37.415065+0000 mon.a (mon.0) 600 : audit [DBG] from='client.? 192.168.123.105:0/1926667804' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-10T11:26:38.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:37 vm07 bash[17804]: audit 2026-03-10T11:26:37.488482+0000 mon.a (mon.0) 601 : audit [DBG] from='client.? 192.168.123.105:0/2703317310' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-10T11:26:38.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:37 vm07 bash[17804]: audit 2026-03-10T11:26:37.643877+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.105:0/1936115584' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T11:26:38.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:37 vm07 bash[17804]: audit 2026-03-10T11:26:37.652660+0000 mon.a (mon.0) 602 : audit [DBG] from='client.? 192.168.123.105:0/3100097666' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T11:26:38.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:37 vm07 bash[17804]: audit 2026-03-10T11:26:37.658649+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.105:0/3900577838' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-10T11:26:38.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:37 vm07 bash[17804]: audit 2026-03-10T11:26:37.659213+0000 mon.a (mon.0) 603 : audit [DBG] from='client.? 192.168.123.105:0/2803745116' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2026-03-10T11:26:38.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:37 vm05 bash[22470]: audit 2026-03-10T11:26:37.097260+0000 mon.c (mon.1) 20 : audit [DBG] from='client.? 192.168.123.105:0/1313375835' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch
2026-03-10T11:26:38.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:37 vm05 bash[22470]: audit 2026-03-10T11:26:37.215502+0000 mon.a (mon.0) 599 : audit [DBG] from='client.? 192.168.123.105:0/3394741972' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch
2026-03-10T11:26:38.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:37 vm05 bash[22470]: audit 2026-03-10T11:26:37.415065+0000 mon.a (mon.0) 600 : audit [DBG] from='client.? 192.168.123.105:0/1926667804' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch
2026-03-10T11:26:38.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:37 vm05 bash[22470]: audit 2026-03-10T11:26:37.488482+0000 mon.a (mon.0) 601 : audit [DBG] from='client.? 192.168.123.105:0/2703317310' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
2026-03-10T11:26:38.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:37 vm05 bash[22470]: audit 2026-03-10T11:26:37.643877+0000 mon.c (mon.1) 21 : audit [DBG] from='client.? 192.168.123.105:0/1936115584' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch
2026-03-10T11:26:38.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:37 vm05 bash[22470]: audit 2026-03-10T11:26:37.652660+0000 mon.a (mon.0) 602 : audit [DBG] from='client.? 192.168.123.105:0/3100097666' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2026-03-10T11:26:38.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:37 vm05 bash[22470]: audit 2026-03-10T11:26:37.658649+0000 mon.c (mon.1) 22 : audit [DBG] from='client.? 192.168.123.105:0/3900577838' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch
2026-03-10T11:26:38.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:37 vm05 bash[22470]: audit 2026-03-10T11:26:37.659213+0000 mon.a (mon.0) 603 : audit [DBG] from='client.? 
192.168.123.105:0/2803745116' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-10T11:26:39.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:39 vm07 bash[17804]: cluster 2026-03-10T11:26:37.772151+0000 mgr.y (mgr.24310) 31 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:39.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:39 vm05 bash[22470]: cluster 2026-03-10T11:26:37.772151+0000 mgr.y (mgr.24310) 31 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:39 vm05 bash[17453]: cluster 2026-03-10T11:26:37.772151+0000 mgr.y (mgr.24310) 31 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:40.495 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:40.872 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:26:40.876 INFO:teuthology.orchestra.run.vm05.stderr:dumped all 2026-03-10T11:26:40.934 INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":true,"pg_map":{"version":15,"stamp":"2026-03-10T11:26:39.772314+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":49608,"kb_used_data":4936,"kb_used_omap":0,"kb_used_meta":44608,"kb_avail":167689784,"statfs":{"total":171765137408,"available":171714338816,"internally_reserved":0,"allocated":5054464,"data_stored":2750974,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":45678592},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":
0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001892"},"pg_stats":[{"pgid":"1.0","version":"50'87","reported_seq":56,"reported_epoch":50,"state":"active+clean","last_fresh":"2026-03-10T11:26:16.686373+0000","last_change":"2026-03-10T11:26:10.131974+0000","last_active":"2026-03-10T11:26:16.686373+0000","last_peered":"2026-03-10T11:26:16.686373+0000","last_clean":"2026-03-10T11:26:16.686373+0000","last_became_active":"2026-03-10T11:26:04.394155+0000","last_became_peered":"2026-03-10T11:26:04.394155+0000","last_unstale":"2026-03-10T11:26:16.686373+0000","last_undegraded":"2026-03-10T11:26:16.686373+0000","last_fullsized":"2026-03-10T11:26:16.686373+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T11:24:46.045485+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T11:24:46.045485+0000","last_clean_scrub_stamp":"2026-03-10T11:24:46.045485+0000","objects_scrubbed":0,"log_size":87,"ondisk_log_size":87,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T11:55:08.893414+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1204224,"data_stored":1193520,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":47,"seq":201863462920,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6184,"kb_used_data":864,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961240,"statfs":{"total":21470642176,"available":21464309760,"internally_reserved":0,"allocated":884736,"data_stored":592673,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.61799999999999999}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.65500000000000003}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.74199999999999999}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.72099999999999997}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.66600000000000004}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.64500000000000002}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.46700000000000003}]}]},{"osd":6,"up_from":41,"seq":176093659147,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6176,"kb_used_data":856,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961248,"statfs":{"total":21470642176,"available":21464317952,"internally_reserved":0,"allocated":876544,"data_stored":592093,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.91000000000000003}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.879}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.92400000000000004}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.89100000000000001}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.63500000000000001}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.0629999999999999}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.75900000000000001}]}]},{"osd":1,"up_from":13,"seq":55834574875,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6432,"kb_used_data":472,"kb_used_omap":0,"kb_used_meta":5952,"kb_avail":20960992,"statfs":{"total":21470642176,"available":21464055808,"internally_reserved":0,"allocated":483328,"data_stored":194833,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6094848},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:26:29 2026","interfaces":[{"interface":"back","average":{"1min":0.66400000000000003,"5min":0.48399999999999999,"15min":0.45400000000000001},"min":{"1min":0.36499999999999999,"5min":0.24399999999999999,"15min":0.24399999999999999},"max":{"1min":1.2809999999999999,"5min":1.2809999999999999,"15min":1.2809999999999999},"last":0.89800000000000002},{"interface":"front","average":{"1min":0.66300000000000003,"5min":0.46600000000000003,"15min":0.433},"min":{"1min":0.26600000000000001,"5min":0.25600000000000001,"15min":0.25600000000000001},"max":{"1min":1.2430000000000001,"5min":1.2430000000000001,"15min":1.2430000000000001},"last":0.77600000000000002}]},{"osd":2,"last update":"Tue Mar 10 11:25:45 2026","interfaces":[{"interface":"back","average":{"1min":0.46500000000000002,"5min":0.46500000000000002,"15min":0.46500000000000002},"min":{"1min":0.221,"5min":0.221,"15min":0.221},"max":{"1min":0.73499999999999999,"5min":0.73499999999999999,"15min":0.73499999999999999},"last":0.75700000000000001},{"interface":"front","average":{"1min":0.47099999999999997,"5min":0.47099999999999997,"15min":0.47099999999999997},"min":{"1min":0.29199999999999998,"5min":0.29199999999999998,"15min":0.29199999999999998},"max":{"1min":0.64900000000000002,"5min":0.64900000000000002,"15min":0.64900000000000002},"last":0.78500000000000003}]},{"osd":3,"last update":"Tue Mar 10 11:26:01 2026","interfaces":[{"interface":"back","average":{"1min":0.55100000000000005,"5min":0.55100000000000005,"15min":0.55100000000000005},"min":{"1min":0.32500000000000001,"5min":0.32500000000000001,"15min":0.32500000000000001},"max":{"1min":0.83599999999999997,"5min":0.83599999999999997,"15min":0.83599999999999997},"last":0.94999999999999996},{"interface":"front","average":{"1min":0.61899999999999999,"5min":0.61899999999999999,"15min":0.61899999999999999},"min":{"1min":0.23400000000000001,"5min":0.23400000000000001,"15min":0.23400000000000001},"max":{"1min":1.4350000000000001,"5min":1.4350000000000001,"15min":1.4350000000000001},"last":0.73999999999999999}]},{"osd":4,"last update":"Tue Mar 10 11:26:17 
2026","interfaces":[{"interface":"back","average":{"1min":0.91500000000000004,"5min":0.91500000000000004,"15min":0.91500000000000004},"min":{"1min":0.33500000000000002,"5min":0.33500000000000002,"15min":0.33500000000000002},"max":{"1min":4.8540000000000001,"5min":4.8540000000000001,"15min":4.8540000000000001},"last":0.81699999999999995},{"interface":"front","average":{"1min":0.84799999999999998,"5min":0.84799999999999998,"15min":0.84799999999999998},"min":{"1min":0.432,"5min":0.432,"15min":0.432},"max":{"1min":4.8390000000000004,"5min":4.8390000000000004,"15min":4.8390000000000004},"last":0.93999999999999995}]},{"osd":5,"last update":"Tue Mar 10 11:26:34 2026","interfaces":[{"interface":"back","average":{"1min":0.749,"5min":0.749,"15min":0.749},"min":{"1min":0.42299999999999999,"5min":0.42299999999999999,"15min":0.42299999999999999},"max":{"1min":1.2110000000000001,"5min":1.2110000000000001,"15min":1.2110000000000001},"last":0.95699999999999996},{"interface":"front","average":{"1min":0.69499999999999995,"5min":0.69499999999999995,"15min":0.69499999999999995},"min":{"1min":0.39300000000000002,"5min":0.39300000000000002,"15min":0.39300000000000002},"max":{"1min":1.274,"5min":1.274,"15min":1.274},"last":0.83399999999999996}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.0009999999999999}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.84799999999999998}]}]},{"osd":0,"up_from":8,"seq":34359738399,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6888,"kb_used_data":864,"kb_used_omap":0,"kb_used_meta":6016,"kb_avail":20960536,"statfs":{"total":21470642176,"available":21463588864,"internally_reserved":0,"allocated":884736,"data_stored":592673,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6160384},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Tue Mar 10 11:26:27 2026","interfaces":[{"interface":"back","average":{"1min":0.55300000000000005,"5min":0.45700000000000002,"15min":0.441},"min":{"1min":0.20000000000000001,"5min":0.191,"15min":0.191},"max":{"1min":0.94099999999999995,"5min":0.94099999999999995,"15min":0.94099999999999995},"last":4.2000000000000002},{"interface":"front","average":{"1min":0.60399999999999998,"5min":0.44500000000000001,"15min":0.41899999999999998},"min":{"1min":0.17499999999999999,"5min":0.17499999999999999,"15min":0.17499999999999999},"max":{"1min":1.2130000000000001,"5min":1.2130000000000001,"15min":1.2130000000000001},"last":0.94099999999999995}]},{"osd":2,"last update":"Tue Mar 10 11:25:47 
2026","interfaces":[{"interface":"back","average":{"1min":0.46300000000000002,"5min":0.46300000000000002,"15min":0.46300000000000002},"min":{"1min":0.19,"5min":0.19,"15min":0.19},"max":{"1min":0.745,"5min":0.745,"15min":0.745},"last":4.5819999999999999},{"interface":"front","average":{"1min":0.495,"5min":0.495,"15min":0.495},"min":{"1min":0.14799999999999999,"5min":0.14799999999999999,"15min":0.14799999999999999},"max":{"1min":0.80500000000000005,"5min":0.80500000000000005,"15min":0.80500000000000005},"last":0.85199999999999998}]},{"osd":3,"last update":"Tue Mar 10 11:26:05 2026","interfaces":[{"interface":"back","average":{"1min":0.51900000000000002,"5min":0.51900000000000002,"15min":0.51900000000000002},"min":{"1min":0.22700000000000001,"5min":0.22700000000000001,"15min":0.22700000000000001},"max":{"1min":0.93100000000000005,"5min":0.93100000000000005,"15min":0.93100000000000005},"last":4.5279999999999996},{"interface":"front","average":{"1min":0.55900000000000005,"5min":0.55900000000000005,"15min":0.55900000000000005},"min":{"1min":0.24399999999999999,"5min":0.24399999999999999,"15min":0.24399999999999999},"max":{"1min":0.92000000000000004,"5min":0.92000000000000004,"15min":0.92000000000000004},"last":0.88}]},{"osd":4,"last update":"Tue Mar 10 11:26:15 2026","interfaces":[{"interface":"back","average":{"1min":0.60899999999999999,"5min":0.60899999999999999,"15min":0.60899999999999999},"min":{"1min":0.377,"5min":0.377,"15min":0.377},"max":{"1min":1.018,"5min":1.018,"15min":1.018},"last":0.91900000000000004},{"interface":"front","average":{"1min":0.61099999999999999,"5min":0.61099999999999999,"15min":0.61099999999999999},"min":{"1min":0.34499999999999997,"5min":0.34499999999999997,"15min":0.34499999999999997},"max":{"1min":0.96999999999999997,"5min":0.96999999999999997,"15min":0.96999999999999997},"last":0.86899999999999999}]},{"osd":5,"last update":"Tue Mar 10 11:26:33 2026","interfaces":[{"interface":"back","average":{"1min":0.86799999999999999,"5min":0.86799999999999999,"15min":0.86799999999999999},"min":{"1min":0.36599999999999999,"5min":0.36599999999999999,"15min":0.36599999999999999},"max":{"1min":4.4729999999999999,"5min":4.4729999999999999,"15min":4.4729999999999999},"last":4.4729999999999999},{"interface":"front","average":{"1min":0.61799999999999999,"5min":0.61799999999999999,"15min":0.61799999999999999},"min":{"1min":0.33300000000000002,"5min":0.33300000000000002,"15min":0.33300000000000002},"max":{"1min":0.94499999999999995,"5min":0.94499999999999995,"15min":0.94499999999999995},"last":0.89000000000000001}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.93200000000000005}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.90800000000000003}]}]},{"osd":2,"up_from":18,"seq":77309411352,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6368,"kb_used_data":472,"kb_used_omap":0,"kb_used_meta":5888,"kb_avail":20961056,"statfs":{"total":21470642176,"available":21464121344,"internally_reserved":0,"allocated":483328,"data_stored":194833,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6029312},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:25:49 2026","interfaces":[{"interface":"back","average":{"1min":0.55100000000000005,"5min":0.55100000000000005,"15min":0.55100000000000005},"min":{"1min":0.33100000000000002,"5min":0.33100000000000002,"15min":0.33100000000000002},"max":{"1min":1.4339999999999999,"5min":1.4339999999999999,"15min":1.4339999999999999},"last":0.61299999999999999},{"interface":"front","average":{"1min":0.51100000000000001,"5min":0.51100000000000001,"15min":0.51100000000000001},"min":{"1min":0.29699999999999999,"5min":0.29699999999999999,"15min":0.29699999999999999},"max":{"1min":1.202,"5min":1.202,"15min":1.202},"last":0.39000000000000001}]},{"osd":1,"last update":"Tue Mar 10 11:25:49 2026","interfaces":[{"interface":"back","average":{"1min":0.54400000000000004,"5min":0.54400000000000004,"15min":0.54400000000000004},"min":{"1min":0.255,"5min":0.255,"15min":0.255},"max":{"1min":1.456,"5min":1.456,"15min":1.456},"last":0.88400000000000001},{"interface":"front","average":{"1min":0.497,"5min":0.497,"15min":0.497},"min":{"1min":0.23599999999999999,"5min":0.23599999999999999,"15min":0.23599999999999999},"max":{"1min":0.85599999999999998,"5min":0.85599999999999998,"15min":0.85599999999999998},"last":0.68600000000000005}]},{"osd":3,"last update":"Tue Mar 10 11:26:01 2026","interfaces":[{"interface":"back","average":{"1min":0.57999999999999996,"5min":0.57999999999999996,"15min":0.57999999999999996},"min":{"1min":0.312,"5min":0.312,"15min":0.312},"max":{"1min":0.77300000000000002,"5min":0.77300000000000002,"15min":0.77300000000000002},"last":0.80400000000000005},{"interface":"front","average":{"1min":0.56599999999999995,"5min":0.56599999999999995,"15min":0.56599999999999995},"min":{"1min":0.248,"5min":0.248,"15min":0.248},"max":{"1min":0.86399999999999999,"5min":0.86399999999999999,"15min":0.86399999999999999},"last":0.63800000000000001}]},{"osd":4,"last update":"Tue Mar 10 11:26:16 2026","interfaces":[{"interface":"back","average":{"1min":0.68500000000000005,"5min":0.68500000000000005,"15min":0.68500000000000005},"min":{"1min":0.376,"5min":0.376,"15min":0.376},"max":{"1min":1.698,"5min":1.698,"15min":1.698},"last":0.752},{"interface":"front","average":{"1min":0.68999999999999995,"5min":0.68999999999999995,"15min":0.68999999999999995},"min":{"1min":0.39900000000000002,"5min":0.39900000000000002,"15min":0.39900000000000002},"max":{"1min":2.1139999999999999,"5min":2.1139999999999999,"15min":2.1139999999999999},"last":0.66000000000000003}]},{"osd":5,"last update":"Tue Mar 10 11:26:31 
2026","interfaces":[{"interface":"back","average":{"1min":0.78800000000000003,"5min":0.78800000000000003,"15min":0.78800000000000003},"min":{"1min":0.55200000000000005,"5min":0.55200000000000005,"15min":0.55200000000000005},"max":{"1min":1.8480000000000001,"5min":1.8480000000000001,"15min":1.8480000000000001},"last":0.749},{"interface":"front","average":{"1min":0.75900000000000001,"5min":0.75900000000000001,"15min":0.75900000000000001},"min":{"1min":0.47099999999999997,"5min":0.47099999999999997,"15min":0.47099999999999997},"max":{"1min":1.925,"5min":1.925,"15min":1.925},"last":0.82199999999999995}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.76400000000000001}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73699999999999999}]}]},{"osd":3,"up_from":24,"seq":103079215124,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5856,"kb_used_data":472,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961568,"statfs":{"total":21470642176,"available":21464645632,"internally_reserved":0,"allocated":483328,"data_stored":194833,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:26:05 2026","interfaces":[{"interface":"back","average":{"1min":0.47399999999999998,"5min":0.47399999999999998,"15min":0.47399999999999998},"min":{"1min":0.26000000000000001,"5min":0.26000000000000001,"15min":0.26000000000000001},"max":{"1min":0.79400000000000004,"5min":0.79400000000000004,"15min":0.79400000000000004},"last":0.495},{"interface":"front","average":{"1min":0.64000000000000001,"5min":0.64000000000000001,"15min":0.64000000000000001},"min":{"1min":0.313,"5min":0.313,"15min":0.313},"max":{"1min":1.258,"5min":1.258,"15min":1.258},"last":0.438}]},{"osd":1,"last update":"Tue Mar 10 11:26:05 2026","interfaces":[{"interface":"back","average":{"1min":0.57699999999999996,"5min":0.57699999999999996,"15min":0.57699999999999996},"min":{"1min":0.32300000000000001,"5min":0.32300000000000001,"15min":0.32300000000000001},"max":{"1min":0.873,"5min":0.873,"15min":0.873},"last":0.75},{"interface":"front","average":{"1min":0.59899999999999998,"5min":0.59899999999999998,"15min":0.59899999999999998},"min":{"1min":0.34000000000000002,"5min":0.34000000000000002,"15min":0.34000000000000002},"max":{"1min":0.80500000000000005,"5min":0.80500000000000005,"15min":0.80500000000000005},"last":0.64200000000000002}]},{"osd":2,"last update":"Tue Mar 10 11:26:05 
2026","interfaces":[{"interface":"back","average":{"1min":0.57899999999999996,"5min":0.57899999999999996,"15min":0.57899999999999996},"min":{"1min":0.32700000000000001,"5min":0.32700000000000001,"15min":0.32700000000000001},"max":{"1min":0.97799999999999998,"5min":0.97799999999999998,"15min":0.97799999999999998},"last":0.44600000000000001},{"interface":"front","average":{"1min":0.65200000000000002,"5min":0.65200000000000002,"15min":0.65200000000000002},"min":{"1min":0.34899999999999998,"5min":0.34899999999999998,"15min":0.34899999999999998},"max":{"1min":0.90900000000000003,"5min":0.90900000000000003,"15min":0.90900000000000003},"last":0.45100000000000001}]},{"osd":4,"last update":"Tue Mar 10 11:26:16 2026","interfaces":[{"interface":"back","average":{"1min":0.77200000000000002,"5min":0.77200000000000002,"15min":0.77200000000000002},"min":{"1min":0.47099999999999997,"5min":0.47099999999999997,"15min":0.47099999999999997},"max":{"1min":1.1359999999999999,"5min":1.1359999999999999,"15min":1.1359999999999999},"last":0.53500000000000003},{"interface":"front","average":{"1min":0.71599999999999997,"5min":0.71599999999999997,"15min":0.71599999999999997},"min":{"1min":0.42799999999999999,"5min":0.42799999999999999,"15min":0.42799999999999999},"max":{"1min":1.1919999999999999,"5min":1.1919999999999999,"15min":1.1919999999999999},"last":0.51400000000000001}]},{"osd":5,"last update":"Tue Mar 10 11:26:31 2026","interfaces":[{"interface":"back","average":{"1min":0.73099999999999998,"5min":0.73099999999999998,"15min":0.73099999999999998},"min":{"1min":0.45800000000000002,"5min":0.45800000000000002,"15min":0.45800000000000002},"max":{"1min":1.0780000000000001,"5min":1.0780000000000001,"15min":1.0780000000000001},"last":0.76000000000000001},{"interface":"front","average":{"1min":0.754,"5min":0.754,"15min":0.754},"min":{"1min":0.437,"5min":0.437,"15min":0.437},"max":{"1min":1.1220000000000001,"5min":1.1220000000000001,"15min":1.1220000000000001},"last":0.77300000000000002}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.73899999999999999}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.48799999999999999}]}]},{"osd":4,"up_from":29,"seq":124554051601,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5852,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961572,"statfs":{"total":21470642176,"available":21464649728,"internally_reserved":0,"allocated":479232,"data_stored":194518,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:26:20 
2026","interfaces":[{"interface":"back","average":{"1min":0.65500000000000003,"5min":0.65500000000000003,"15min":0.65500000000000003},"min":{"1min":0.23799999999999999,"5min":0.23799999999999999,"15min":0.23799999999999999},"max":{"1min":1.2869999999999999,"5min":1.2869999999999999,"15min":1.2869999999999999},"last":0.77500000000000002},{"interface":"front","average":{"1min":0.622,"5min":0.622,"15min":0.622},"min":{"1min":0.28799999999999998,"5min":0.28799999999999998,"15min":0.28799999999999998},"max":{"1min":1.1240000000000001,"5min":1.1240000000000001,"15min":1.1240000000000001},"last":0.84299999999999997}]},{"osd":1,"last update":"Tue Mar 10 11:26:20 2026","interfaces":[{"interface":"back","average":{"1min":0.65500000000000003,"5min":0.65500000000000003,"15min":0.65500000000000003},"min":{"1min":0.38300000000000001,"5min":0.38300000000000001,"15min":0.38300000000000001},"max":{"1min":1.0920000000000001,"5min":1.0920000000000001,"15min":1.0920000000000001},"last":0.90700000000000003},{"interface":"front","average":{"1min":0.67900000000000005,"5min":0.67900000000000005,"15min":0.67900000000000005},"min":{"1min":0.309,"5min":0.309,"15min":0.309},"max":{"1min":1.179,"5min":1.179,"15min":1.179},"last":0.74099999999999999}]},{"osd":2,"last update":"Tue Mar 10 11:26:20 2026","interfaces":[{"interface":"back","average":{"1min":0.67900000000000005,"5min":0.67900000000000005,"15min":0.67900000000000005},"min":{"1min":0.437,"5min":0.437,"15min":0.437},"max":{"1min":1.2030000000000001,"5min":1.2030000000000001,"15min":1.2030000000000001},"last":0.76200000000000001},{"interface":"front","average":{"1min":0.60999999999999999,"5min":0.60999999999999999,"15min":0.60999999999999999},"min":{"1min":0.40899999999999997,"5min":0.40899999999999997,"15min":0.40899999999999997},"max":{"1min":1.1799999999999999,"5min":1.1799999999999999,"15min":1.1799999999999999},"last":0.85199999999999998}]},{"osd":3,"last update":"Tue Mar 10 11:26:20 2026","interfaces":[{"interface":"back","average":{"1min":0.63300000000000001,"5min":0.63300000000000001,"15min":0.63300000000000001},"min":{"1min":0.35199999999999998,"5min":0.35199999999999998,"15min":0.35199999999999998},"max":{"1min":1.2170000000000001,"5min":1.2170000000000001,"15min":1.2170000000000001},"last":0.85899999999999999},{"interface":"front","average":{"1min":0.69699999999999995,"5min":0.69699999999999995,"15min":0.69699999999999995},"min":{"1min":0.29299999999999998,"5min":0.29299999999999998,"15min":0.29299999999999998},"max":{"1min":1.1000000000000001,"5min":1.1000000000000001,"15min":1.1000000000000001},"last":0.91700000000000004}]},{"osd":5,"last update":"Tue Mar 10 11:26:32 2026","interfaces":[{"interface":"back","average":{"1min":0.68300000000000005,"5min":0.68300000000000005,"15min":0.68300000000000005},"min":{"1min":0.20499999999999999,"5min":0.20499999999999999,"15min":0.20499999999999999},"max":{"1min":1.5940000000000001,"5min":1.5940000000000001,"15min":1.5940000000000001},"last":0.72299999999999998},{"interface":"front","average":{"1min":0.58099999999999996,"5min":0.58099999999999996,"15min":0.58099999999999996},"min":{"1min":0.219,"5min":0.219,"15min":0.219},"max":{"1min":0.99299999999999999,"5min":0.99299999999999999,"15min":0.99299999999999999},"last":0.83099999999999996}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.0169999999999999}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.1619999999999999}]}]},{"osd":5,"up_from":35,"seq":150323855374,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5852,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961572,"statfs":{"total":21470642176,"available":21464649728,"internally_reserved":0,"allocated":479232,"data_stored":194518,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:26:35 2026","interfaces":[{"interface":"back","average":{"1min":0.69699999999999995,"5min":0.69699999999999995,"15min":0.69699999999999995},"min":{"1min":0.27600000000000002,"5min":0.27600000000000002,"15min":0.27600000000000002},"max":{"1min":1.2789999999999999,"5min":1.2789999999999999,"15min":1.2789999999999999},"last":0.64300000000000002},{"interface":"front","average":{"1min":0.66200000000000003,"5min":0.66200000000000003,"15min":0.66200000000000003},"min":{"1min":0.26100000000000001,"5min":0.26100000000000001,"15min":0.26100000000000001},"max":{"1min":1.3160000000000001,"5min":1.3160000000000001,"15min":1.3160000000000001},"last":0.433}]},{"osd":1,"last update":"Tue Mar 10 11:26:35 2026","interfaces":[{"interface":"back","average":{"1min":0.68100000000000005,"5min":0.68100000000000005,"15min":0.68100000000000005},"min":{"1min":0.308,"5min":0.308,"15min":0.308},"max":{"1min":1.3819999999999999,"5min":1.3819999999999999,"15min":1.3819999999999999},"last":0.68000000000000005},{"interface":"front","average":{"1min":0.71599999999999997,"5min":0.71599999999999997,"15min":0.71599999999999997},"min":{"1min":0.28199999999999997,"5min":0.28199999999999997,"15min":0.28199999999999997},"max":{"1min":1.5329999999999999,"5min":1.5329999999999999,"15min":1.5329999999999999},"last":0.66400000000000003}]},{"osd":2,"last update":"Tue Mar 10 11:26:35 2026","interfaces":[{"interface":"back","average":{"1min":0.72399999999999998,"5min":0.72399999999999998,"15min":0.72399999999999998},"min":{"1min":0.36899999999999999,"5min":0.36899999999999999,"15min":0.36899999999999999},"max":{"1min":1.264,"5min":1.264,"15min":1.264},"last":0.65900000000000003},{"interface":"front","average":{"1min":0.67400000000000004,"5min":0.67400000000000004,"15min":0.67400000000000004},"min":{"1min":0.41499999999999998,"5min":0.41499999999999998,"15min":0.41499999999999998},"max":{"1min":1.3260000000000001,"5min":1.3260000000000001,"15min":1.3260000000000001},"last":0.41499999999999998}]},{"osd":3,"last update":"Tue Mar 10 11:26:35 
2026","interfaces":[{"interface":"back","average":{"1min":0.754,"5min":0.754,"15min":0.754},"min":{"1min":0.34000000000000002,"5min":0.34000000000000002,"15min":0.34000000000000002},"max":{"1min":1.4199999999999999,"5min":1.4199999999999999,"15min":1.4199999999999999},"last":0.65100000000000002},{"interface":"front","average":{"1min":0.71999999999999997,"5min":0.71999999999999997,"15min":0.71999999999999997},"min":{"1min":0.34799999999999998,"5min":0.34799999999999998,"15min":0.34799999999999998},"max":{"1min":1.4450000000000001,"5min":1.4450000000000001,"15min":1.4450000000000001},"last":0.60999999999999999}]},{"osd":4,"last update":"Tue Mar 10 11:26:35 2026","interfaces":[{"interface":"back","average":{"1min":0.66500000000000004,"5min":0.66500000000000004,"15min":0.66500000000000004},"min":{"1min":0.20499999999999999,"5min":0.20499999999999999,"15min":0.20499999999999999},"max":{"1min":1.1000000000000001,"5min":1.1000000000000001,"15min":1.1000000000000001},"last":0.58299999999999996},{"interface":"front","average":{"1min":0.65300000000000002,"5min":0.65300000000000002,"15min":0.65300000000000002},"min":{"1min":0.23499999999999999,"5min":0.23499999999999999,"15min":0.23499999999999999},"max":{"1min":1.056,"5min":1.056,"15min":1.056},"last":0.63}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.60799999999999998}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.39000000000000001}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T11:26:40.935 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph pg dump --format=json 2026-03-10T11:26:41.698 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:41 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:26:41] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:26:41.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:41 vm07 bash[17804]: cluster 2026-03-10T11:26:39.772449+0000 mgr.y (mgr.24310) 32 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:41.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:41 vm05 bash[22470]: cluster 2026-03-10T11:26:39.772449+0000 mgr.y (mgr.24310) 32 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:41.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:41 vm05 bash[17453]: cluster 2026-03-10T11:26:39.772449+0000 mgr.y (mgr.24310) 32 : cluster [DBG] pgmap v15: 1 pgs: 1 
active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:42.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:42 vm07 bash[17804]: audit 2026-03-10T11:26:40.868248+0000 mgr.y (mgr.24310) 33 : audit [DBG] from='client.14535 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:26:42.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:42 vm05 bash[22470]: audit 2026-03-10T11:26:40.868248+0000 mgr.y (mgr.24310) 33 : audit [DBG] from='client.14535 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:26:42.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:42 vm05 bash[17453]: audit 2026-03-10T11:26:40.868248+0000 mgr.y (mgr.24310) 33 : audit [DBG] from='client.14535 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:26:43.608 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:43.623 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:43 vm05 bash[17453]: cluster 2026-03-10T11:26:41.772724+0000 mgr.y (mgr.24310) 34 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:43.623 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:43 vm05 bash[22470]: cluster 2026-03-10T11:26:41.772724+0000 mgr.y (mgr.24310) 34 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:43.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:43 vm07 bash[17804]: cluster 2026-03-10T11:26:41.772724+0000 mgr.y (mgr.24310) 34 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:43.962 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:26:43.965 INFO:teuthology.orchestra.run.vm05.stderr:dumped all 2026-03-10T11:26:44.022 
INFO:teuthology.orchestra.run.vm05.stdout:{"pg_ready":true,"pg_map":{"version":17,"stamp":"2026-03-10T11:26:43.772843+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":8,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":167739392,"kb_used":49608,"kb_used_data":4936,"kb_used_omap":0,"kb_used_meta":44608,"kb_avail":167689784,"statfs":{"total":171765137408,"available":171714338816,"internally_reserved":0,"allocated":5054464,"data_stored":2750974,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":45678592},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001799"},"pg_stats":[{"pgid":"1.0","version":"50'87","reported_seq":56,"reported_epoch":50,"state":"active+clean","last_fresh":"2026-03-10T11:26:16.686373+0000","last_change":"2026-03-10T11:26:10.131974+0000","last_active":"2026-
03-10T11:26:16.686373+0000","last_peered":"2026-03-10T11:26:16.686373+0000","last_clean":"2026-03-10T11:26:16.686373+0000","last_became_active":"2026-03-10T11:26:04.394155+0000","last_became_peered":"2026-03-10T11:26:04.394155+0000","last_unstale":"2026-03-10T11:26:16.686373+0000","last_undegraded":"2026-03-10T11:26:16.686373+0000","last_fullsized":"2026-03-10T11:26:16.686373+0000","mapping_epoch":48,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":49,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-10T11:24:46.045485+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-10T11:24:46.045485+0000","last_clean_scrub_stamp":"2026-03-10T11:24:46.045485+0000","objects_scrubbed":0,"log_size":87,"ondisk_log_size":87,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T11:55:08.893414+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":192,"num_read_kb":288,"num_write":133,"num_write_kb":1372,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":397840,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1204224,"data_stored":1193520,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":87,"ondisk_log_size":87,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":47,"seq":201863462921,"num_pgs":1,"num_osds":1,"num_per_
pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6184,"kb_used_data":864,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961240,"statfs":{"total":21470642176,"available":21464309760,"internally_reserved":0,"allocated":884736,"data_stored":592673,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.93000000000000005}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.82999999999999996}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.90300000000000002}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.93899999999999995}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.752}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.85799999999999998}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.80800000000000005}]}]},{"osd":6,"up_from":41,"seq":176093659148,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6176,"kb_used_data":856,"kb_used_omap":0,"kb_used_meta":5312,"kb_avail":20961248,"statfs":{"total":21470642176,"available":21464317952,"internally_reserved":0,"allocated":876544,"data_stored":592093,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5439488},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.91000000000000003}]},{"osd":1,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.879}]},{"osd":2,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.92400000000000004}]},{"osd":3,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.89100000000000001}]},{"osd":4,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.63500000000000001}]},{"osd":5,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.0629999999999999}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.75900000000000001}]}]},{"osd":1,"up_from":13,"seq":55834574876,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6432,"kb_used_data":472,"kb_used_omap":0,"kb_used_meta":5952,"kb_avail":20960992,"statfs":{"total":21470642176,"available":21464055808,"internally_reserved":0,"allocated":483328,"data_stored":194833,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6094848},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:26:29 2026","interfaces":[{"interface":"back","average":{"1min":0.66400000000000003,"5min":0.48399999999999999,"15min":0.45400000000000001},"min":{"1min":0.36499999999999999,"5min":0.24399999999999999,"15min":0.24399999999999999},"max":{"1min":1.2809999999999999,"5min":1.2809999999999999,"15min":1.2809999999999999},"last":0.375},{"interface":"front","average":{"1min":0.66300000000000003,"5min":0.46600000000000003,"15min":0.433},"min":{"1min":0.26600000000000001,"5min":0.25600000000000001,"15min":0.25600000000000001},"max":{"1min":1.2430000000000001,"5min":1.2430000000000001,"15min":1.2430000000000001},"last":0.82699999999999996}]},{"osd":2,"last update":"Tue Mar 10 11:25:45 2026","interfaces":[{"interface":"back","average":{"1min":0.46500000000000002,"5min":0.46500000000000002,"15min":0.46500000000000002},"min":{"1min":0.221,"5min":0.221,"15min":0.221},"max":{"1min":0.73499999999999999,"5min":0.73499999999999999,"15min":0.73499999999999999},"last":0.48599999999999999},{"interface":"front","average":{"1min":0.47099999999999997,"5min":0.47099999999999997,"15min":0.47099999999999997},"min":{"1min":0.29199999999999998,"5min":0.29199999999999998,"15min":0.29199999999999998},"max":{"1min":0.64900000000000002,"5min":0.64900000000000002,"15min":0.64900000000000002},"last":0.83499999999999996}]},{"osd":3,"last update":"Tue Mar 10 11:26:01 
2026","interfaces":[{"interface":"back","average":{"1min":0.55100000000000005,"5min":0.55100000000000005,"15min":0.55100000000000005},"min":{"1min":0.32500000000000001,"5min":0.32500000000000001,"15min":0.32500000000000001},"max":{"1min":0.83599999999999997,"5min":0.83599999999999997,"15min":0.83599999999999997},"last":0.75800000000000001},{"interface":"front","average":{"1min":0.61899999999999999,"5min":0.61899999999999999,"15min":0.61899999999999999},"min":{"1min":0.23400000000000001,"5min":0.23400000000000001,"15min":0.23400000000000001},"max":{"1min":1.4350000000000001,"5min":1.4350000000000001,"15min":1.4350000000000001},"last":0.50600000000000001}]},{"osd":4,"last update":"Tue Mar 10 11:26:17 2026","interfaces":[{"interface":"back","average":{"1min":0.91500000000000004,"5min":0.91500000000000004,"15min":0.91500000000000004},"min":{"1min":0.33500000000000002,"5min":0.33500000000000002,"15min":0.33500000000000002},"max":{"1min":4.8540000000000001,"5min":4.8540000000000001,"15min":4.8540000000000001},"last":0.81100000000000005},{"interface":"front","average":{"1min":0.84799999999999998,"5min":0.84799999999999998,"15min":0.84799999999999998},"min":{"1min":0.432,"5min":0.432,"15min":0.432},"max":{"1min":4.8390000000000004,"5min":4.8390000000000004,"15min":4.8390000000000004},"last":0.46100000000000002}]},{"osd":5,"last update":"Tue Mar 10 11:26:34 2026","interfaces":[{"interface":"back","average":{"1min":0.749,"5min":0.749,"15min":0.749},"min":{"1min":0.42299999999999999,"5min":0.42299999999999999,"15min":0.42299999999999999},"max":{"1min":1.2110000000000001,"5min":1.2110000000000001,"15min":1.2110000000000001},"last":0.60099999999999998},{"interface":"front","average":{"1min":0.69499999999999995,"5min":0.69499999999999995,"15min":0.69499999999999995},"min":{"1min":0.39300000000000002,"5min":0.39300000000000002,"15min":0.39300000000000002},"max":{"1min":1.274,"5min":1.274,"15min":1.274},"last":0.84199999999999997}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.40899999999999997}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.79200000000000004}]}]},{"osd":0,"up_from":8,"seq":34359738400,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":6888,"kb_used_data":864,"kb_used_omap":0,"kb_used_meta":6016,"kb_avail":20960536,"statfs":{"total":21470642176,"available":21463588864,"internally_reserved":0,"allocated":884736,"data_stored":592673,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6160384},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":1,"last update":"Tue Mar 10 11:26:27 
2026","interfaces":[{"interface":"back","average":{"1min":0.55300000000000005,"5min":0.45700000000000002,"15min":0.441},"min":{"1min":0.20000000000000001,"5min":0.191,"15min":0.191},"max":{"1min":0.94099999999999995,"5min":0.94099999999999995,"15min":0.94099999999999995},"last":0.85099999999999998},{"interface":"front","average":{"1min":0.60399999999999998,"5min":0.44500000000000001,"15min":0.41899999999999998},"min":{"1min":0.17499999999999999,"5min":0.17499999999999999,"15min":0.17499999999999999},"max":{"1min":1.2130000000000001,"5min":1.2130000000000001,"15min":1.2130000000000001},"last":0.748}]},{"osd":2,"last update":"Tue Mar 10 11:25:47 2026","interfaces":[{"interface":"back","average":{"1min":0.46300000000000002,"5min":0.46300000000000002,"15min":0.46300000000000002},"min":{"1min":0.19,"5min":0.19,"15min":0.19},"max":{"1min":0.745,"5min":0.745,"15min":0.745},"last":0.435},{"interface":"front","average":{"1min":0.495,"5min":0.495,"15min":0.495},"min":{"1min":0.14799999999999999,"5min":0.14799999999999999,"15min":0.14799999999999999},"max":{"1min":0.80500000000000005,"5min":0.80500000000000005,"15min":0.80500000000000005},"last":0.88400000000000001}]},{"osd":3,"last update":"Tue Mar 10 11:26:05 2026","interfaces":[{"interface":"back","average":{"1min":0.51900000000000002,"5min":0.51900000000000002,"15min":0.51900000000000002},"min":{"1min":0.22700000000000001,"5min":0.22700000000000001,"15min":0.22700000000000001},"max":{"1min":0.93100000000000005,"5min":0.93100000000000005,"15min":0.93100000000000005},"last":0.80300000000000005},{"interface":"front","average":{"1min":0.55900000000000005,"5min":0.55900000000000005,"15min":0.55900000000000005},"min":{"1min":0.24399999999999999,"5min":0.24399999999999999,"15min":0.24399999999999999},"max":{"1min":0.92000000000000004,"5min":0.92000000000000004,"15min":0.92000000000000004},"last":0.91500000000000004}]},{"osd":4,"last update":"Tue Mar 10 11:26:15 2026","interfaces":[{"interface":"back","average":{"1min":0.60899999999999999,"5min":0.60899999999999999,"15min":0.60899999999999999},"min":{"1min":0.377,"5min":0.377,"15min":0.377},"max":{"1min":1.018,"5min":1.018,"15min":1.018},"last":0.81299999999999994},{"interface":"front","average":{"1min":0.61099999999999999,"5min":0.61099999999999999,"15min":0.61099999999999999},"min":{"1min":0.34499999999999997,"5min":0.34499999999999997,"15min":0.34499999999999997},"max":{"1min":0.96999999999999997,"5min":0.96999999999999997,"15min":0.96999999999999997},"last":0.87}]},{"osd":5,"last update":"Tue Mar 10 11:26:33 2026","interfaces":[{"interface":"back","average":{"1min":0.86799999999999999,"5min":0.86799999999999999,"15min":0.86799999999999999},"min":{"1min":0.36599999999999999,"5min":0.36599999999999999,"15min":0.36599999999999999},"max":{"1min":4.4729999999999999,"5min":4.4729999999999999,"15min":4.4729999999999999},"last":0.72699999999999998},{"interface":"front","average":{"1min":0.61799999999999999,"5min":0.61799999999999999,"15min":0.61799999999999999},"min":{"1min":0.33300000000000002,"5min":0.33300000000000002,"15min":0.33300000000000002},"max":{"1min":0.94499999999999995,"5min":0.94499999999999995,"15min":0.94499999999999995},"last":0.90200000000000002}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.997}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.95999999999999996}]}]},{"osd":2,"up_from":18,"seq":77309411353,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":6368,"kb_used_data":472,"kb_used_omap":0,"kb_used_meta":5888,"kb_avail":20961056,"statfs":{"total":21470642176,"available":21464121344,"internally_reserved":0,"allocated":483328,"data_stored":194833,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":6029312},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:25:49 2026","interfaces":[{"interface":"back","average":{"1min":0.55100000000000005,"5min":0.55100000000000005,"15min":0.55100000000000005},"min":{"1min":0.33100000000000002,"5min":0.33100000000000002,"15min":0.33100000000000002},"max":{"1min":1.4339999999999999,"5min":1.4339999999999999,"15min":1.4339999999999999},"last":0.80500000000000005},{"interface":"front","average":{"1min":0.51100000000000001,"5min":0.51100000000000001,"15min":0.51100000000000001},"min":{"1min":0.29699999999999999,"5min":0.29699999999999999,"15min":0.29699999999999999},"max":{"1min":1.202,"5min":1.202,"15min":1.202},"last":0.82399999999999995}]},{"osd":1,"last update":"Tue Mar 10 11:25:49 2026","interfaces":[{"interface":"back","average":{"1min":0.54400000000000004,"5min":0.54400000000000004,"15min":0.54400000000000004},"min":{"1min":0.255,"5min":0.255,"15min":0.255},"max":{"1min":1.456,"5min":1.456,"15min":1.456},"last":0.72899999999999998},{"interface":"front","average":{"1min":0.497,"5min":0.497,"15min":0.497},"min":{"1min":0.23599999999999999,"5min":0.23599999999999999,"15min":0.23599999999999999},"max":{"1min":0.85599999999999998,"5min":0.85599999999999998,"15min":0.85599999999999998},"last":0.81799999999999995}]},{"osd":3,"last update":"Tue Mar 10 11:26:01 2026","interfaces":[{"interface":"back","average":{"1min":0.57999999999999996,"5min":0.57999999999999996,"15min":0.57999999999999996},"min":{"1min":0.312,"5min":0.312,"15min":0.312},"max":{"1min":0.77300000000000002,"5min":0.77300000000000002,"15min":0.77300000000000002},"last":0.86199999999999999},{"interface":"front","average":{"1min":0.56599999999999995,"5min":0.56599999999999995,"15min":0.56599999999999995},"min":{"1min":0.248,"5min":0.248,"15min":0.248},"max":{"1min":0.86399999999999999,"5min":0.86399999999999999,"15min":0.86399999999999999},"last":0.98799999999999999}]},{"osd":4,"last update":"Tue Mar 10 11:26:16 2026","interfaces":[{"interface":"back","average":{"1min":0.68500000000000005,"5min":0.68500000000000005,"15min":0.68500000000000005},"min":{"1min":0.376,"5min":0.376,"15min":0.376},"max":{"1min":1.698,"5min":1.698,"15min":1.698},"last":0.749},{"interface":"front","average":{"1min":0.68999999999999995,"5min":0.68999999999999995,"15min":0.68999999999999995},"min":{"1min":0.39900000000000002,"5min":0.39900000000000002,"15min":0.39900000000000002},"max":{"1min":2.1139999999999999,"5min":2.1139999999999999,"15min":2.1139999999999999},"last":0.77500000000000002}]},{"osd":5,"last update":"Tue Mar 10 11:26:31 
2026","interfaces":[{"interface":"back","average":{"1min":0.78800000000000003,"5min":0.78800000000000003,"15min":0.78800000000000003},"min":{"1min":0.55200000000000005,"5min":0.55200000000000005,"15min":0.55200000000000005},"max":{"1min":1.8480000000000001,"5min":1.8480000000000001,"15min":1.8480000000000001},"last":0.75700000000000001},{"interface":"front","average":{"1min":0.75900000000000001,"5min":0.75900000000000001,"15min":0.75900000000000001},"min":{"1min":0.47099999999999997,"5min":0.47099999999999997,"15min":0.47099999999999997},"max":{"1min":1.925,"5min":1.925,"15min":1.925},"last":0.78900000000000003}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.84499999999999997}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.88}]}]},{"osd":3,"up_from":24,"seq":103079215125,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5856,"kb_used_data":472,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961568,"statfs":{"total":21470642176,"available":21464645632,"internally_reserved":0,"allocated":483328,"data_stored":194833,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:26:05 2026","interfaces":[{"interface":"back","average":{"1min":0.47399999999999998,"5min":0.47399999999999998,"15min":0.47399999999999998},"min":{"1min":0.26000000000000001,"5min":0.26000000000000001,"15min":0.26000000000000001},"max":{"1min":0.79400000000000004,"5min":0.79400000000000004,"15min":0.79400000000000004},"last":0.64900000000000002},{"interface":"front","average":{"1min":0.64000000000000001,"5min":0.64000000000000001,"15min":0.64000000000000001},"min":{"1min":0.313,"5min":0.313,"15min":0.313},"max":{"1min":1.258,"5min":1.258,"15min":1.258},"last":0.85699999999999998}]},{"osd":1,"last update":"Tue Mar 10 11:26:05 2026","interfaces":[{"interface":"back","average":{"1min":0.57699999999999996,"5min":0.57699999999999996,"15min":0.57699999999999996},"min":{"1min":0.32300000000000001,"5min":0.32300000000000001,"15min":0.32300000000000001},"max":{"1min":0.873,"5min":0.873,"15min":0.873},"last":0.61499999999999999},{"interface":"front","average":{"1min":0.59899999999999998,"5min":0.59899999999999998,"15min":0.59899999999999998},"min":{"1min":0.34000000000000002,"5min":0.34000000000000002,"15min":0.34000000000000002},"max":{"1min":0.80500000000000005,"5min":0.80500000000000005,"15min":0.80500000000000005},"last":0.35099999999999998}]},{"osd":2,"last update":"Tue Mar 10 11:26:05 
2026","interfaces":[{"interface":"back","average":{"1min":0.57899999999999996,"5min":0.57899999999999996,"15min":0.57899999999999996},"min":{"1min":0.32700000000000001,"5min":0.32700000000000001,"15min":0.32700000000000001},"max":{"1min":0.97799999999999998,"5min":0.97799999999999998,"15min":0.97799999999999998},"last":0.66900000000000004},{"interface":"front","average":{"1min":0.65200000000000002,"5min":0.65200000000000002,"15min":0.65200000000000002},"min":{"1min":0.34899999999999998,"5min":0.34899999999999998,"15min":0.34899999999999998},"max":{"1min":0.90900000000000003,"5min":0.90900000000000003,"15min":0.90900000000000003},"last":0.78800000000000003}]},{"osd":4,"last update":"Tue Mar 10 11:26:16 2026","interfaces":[{"interface":"back","average":{"1min":0.77200000000000002,"5min":0.77200000000000002,"15min":0.77200000000000002},"min":{"1min":0.47099999999999997,"5min":0.47099999999999997,"15min":0.47099999999999997},"max":{"1min":1.1359999999999999,"5min":1.1359999999999999,"15min":1.1359999999999999},"last":0.92200000000000004},{"interface":"front","average":{"1min":0.71599999999999997,"5min":0.71599999999999997,"15min":0.71599999999999997},"min":{"1min":0.42799999999999999,"5min":0.42799999999999999,"15min":0.42799999999999999},"max":{"1min":1.1919999999999999,"5min":1.1919999999999999,"15min":1.1919999999999999},"last":0.66000000000000003}]},{"osd":5,"last update":"Tue Mar 10 11:26:31 2026","interfaces":[{"interface":"back","average":{"1min":0.73099999999999998,"5min":0.73099999999999998,"15min":0.73099999999999998},"min":{"1min":0.45800000000000002,"5min":0.45800000000000002,"15min":0.45800000000000002},"max":{"1min":1.0780000000000001,"5min":1.0780000000000001,"15min":1.0780000000000001},"last":0.89400000000000002},{"interface":"front","average":{"1min":0.754,"5min":0.754,"15min":0.754},"min":{"1min":0.437,"5min":0.437,"15min":0.437},"max":{"1min":1.1220000000000001,"5min":1.1220000000000001,"15min":1.1220000000000001},"last":0.68899999999999995}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.75600000000000001}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.82399999999999995}]}]},{"osd":4,"up_from":29,"seq":124554051602,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5852,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961572,"statfs":{"total":21470642176,"available":21464649728,"internally_reserved":0,"allocated":479232,"data_stored":194518,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:26:20 
2026","interfaces":[{"interface":"back","average":{"1min":0.65500000000000003,"5min":0.65500000000000003,"15min":0.65500000000000003},"min":{"1min":0.23799999999999999,"5min":0.23799999999999999,"15min":0.23799999999999999},"max":{"1min":1.2869999999999999,"5min":1.2869999999999999,"15min":1.2869999999999999},"last":0.497},{"interface":"front","average":{"1min":0.622,"5min":0.622,"15min":0.622},"min":{"1min":0.28799999999999998,"5min":0.28799999999999998,"15min":0.28799999999999998},"max":{"1min":1.1240000000000001,"5min":1.1240000000000001,"15min":1.1240000000000001},"last":0.80500000000000005}]},{"osd":1,"last update":"Tue Mar 10 11:26:20 2026","interfaces":[{"interface":"back","average":{"1min":0.65500000000000003,"5min":0.65500000000000003,"15min":0.65500000000000003},"min":{"1min":0.38300000000000001,"5min":0.38300000000000001,"15min":0.38300000000000001},"max":{"1min":1.0920000000000001,"5min":1.0920000000000001,"15min":1.0920000000000001},"last":0.63800000000000001},{"interface":"front","average":{"1min":0.67900000000000005,"5min":0.67900000000000005,"15min":0.67900000000000005},"min":{"1min":0.309,"5min":0.309,"15min":0.309},"max":{"1min":1.179,"5min":1.179,"15min":1.179},"last":0.68700000000000006}]},{"osd":2,"last update":"Tue Mar 10 11:26:20 2026","interfaces":[{"interface":"back","average":{"1min":0.67900000000000005,"5min":0.67900000000000005,"15min":0.67900000000000005},"min":{"1min":0.437,"5min":0.437,"15min":0.437},"max":{"1min":1.2030000000000001,"5min":1.2030000000000001,"15min":1.2030000000000001},"last":0.51500000000000001},{"interface":"front","average":{"1min":0.60999999999999999,"5min":0.60999999999999999,"15min":0.60999999999999999},"min":{"1min":0.40899999999999997,"5min":0.40899999999999997,"15min":0.40899999999999997},"max":{"1min":1.1799999999999999,"5min":1.1799999999999999,"15min":1.1799999999999999},"last":0.68100000000000005}]},{"osd":3,"last update":"Tue Mar 10 11:26:20 2026","interfaces":[{"interface":"back","average":{"1min":0.63300000000000001,"5min":0.63300000000000001,"15min":0.63300000000000001},"min":{"1min":0.35199999999999998,"5min":0.35199999999999998,"15min":0.35199999999999998},"max":{"1min":1.2170000000000001,"5min":1.2170000000000001,"15min":1.2170000000000001},"last":0.75900000000000001},{"interface":"front","average":{"1min":0.69699999999999995,"5min":0.69699999999999995,"15min":0.69699999999999995},"min":{"1min":0.29299999999999998,"5min":0.29299999999999998,"15min":0.29299999999999998},"max":{"1min":1.1000000000000001,"5min":1.1000000000000001,"15min":1.1000000000000001},"last":0.66300000000000003}]},{"osd":5,"last update":"Tue Mar 10 11:26:32 2026","interfaces":[{"interface":"back","average":{"1min":0.68300000000000005,"5min":0.68300000000000005,"15min":0.68300000000000005},"min":{"1min":0.20499999999999999,"5min":0.20499999999999999,"15min":0.20499999999999999},"max":{"1min":1.5940000000000001,"5min":1.5940000000000001,"15min":1.5940000000000001},"last":0.46200000000000002},{"interface":"front","average":{"1min":0.58099999999999996,"5min":0.58099999999999996,"15min":0.58099999999999996},"min":{"1min":0.219,"5min":0.219,"15min":0.219},"max":{"1min":0.99299999999999999,"5min":0.99299999999999999,"15min":0.99299999999999999},"last":0.66600000000000004}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.69199999999999995}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 
1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.67400000000000004}]}]},{"osd":5,"up_from":35,"seq":150323855375,"num_pgs":0,"num_osds":1,"num_per_pool_osds":0,"num_per_pool_omap_osds":0,"kb":20967424,"kb_used":5852,"kb_used_data":468,"kb_used_omap":0,"kb_used_meta":5376,"kb_avail":20961572,"statfs":{"total":21470642176,"available":21464649728,"internally_reserved":0,"allocated":479232,"data_stored":194518,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":5505024},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[{"osd":0,"last update":"Tue Mar 10 11:26:35 2026","interfaces":[{"interface":"back","average":{"1min":0.69699999999999995,"5min":0.69699999999999995,"15min":0.69699999999999995},"min":{"1min":0.27600000000000002,"5min":0.27600000000000002,"15min":0.27600000000000002},"max":{"1min":1.2789999999999999,"5min":1.2789999999999999,"15min":1.2789999999999999},"last":0.50700000000000001},{"interface":"front","average":{"1min":0.66200000000000003,"5min":0.66200000000000003,"15min":0.66200000000000003},"min":{"1min":0.26100000000000001,"5min":0.26100000000000001,"15min":0.26100000000000001},"max":{"1min":1.3160000000000001,"5min":1.3160000000000001,"15min":1.3160000000000001},"last":1.042}]},{"osd":1,"last update":"Tue Mar 10 11:26:35 2026","interfaces":[{"interface":"back","average":{"1min":0.68100000000000005,"5min":0.68100000000000005,"15min":0.68100000000000005},"min":{"1min":0.308,"5min":0.308,"15min":0.308},"max":{"1min":1.3819999999999999,"5min":1.3819999999999999,"15min":1.3819999999999999},"last":1.075},{"interface":"front","average":{"1min":0.71599999999999997,"5min":0.71599999999999997,"15min":0.71599999999999997},"min":{"1min":0.28199999999999997,"5min":0.28199999999999997,"15min":0.28199999999999997},"max":{"1min":1.5329999999999999,"5min":1.5329999999999999,"15min":1.5329999999999999},"last":0.47399999999999998}]},{"osd":2,"last update":"Tue Mar 10 11:26:35 2026","interfaces":[{"interface":"back","average":{"1min":0.72399999999999998,"5min":0.72399999999999998,"15min":0.72399999999999998},"min":{"1min":0.36899999999999999,"5min":0.36899999999999999,"15min":0.36899999999999999},"max":{"1min":1.264,"5min":1.264,"15min":1.264},"last":0.51800000000000002},{"interface":"front","average":{"1min":0.67400000000000004,"5min":0.67400000000000004,"15min":0.67400000000000004},"min":{"1min":0.41499999999999998,"5min":0.41499999999999998,"15min":0.41499999999999998},"max":{"1min":1.3260000000000001,"5min":1.3260000000000001,"15min":1.3260000000000001},"last":1.054}]},{"osd":3,"last update":"Tue Mar 10 11:26:35 
2026","interfaces":[{"interface":"back","average":{"1min":0.754,"5min":0.754,"15min":0.754},"min":{"1min":0.34000000000000002,"5min":0.34000000000000002,"15min":0.34000000000000002},"max":{"1min":1.4199999999999999,"5min":1.4199999999999999,"15min":1.4199999999999999},"last":0.624},{"interface":"front","average":{"1min":0.71999999999999997,"5min":0.71999999999999997,"15min":0.71999999999999997},"min":{"1min":0.34799999999999998,"5min":0.34799999999999998,"15min":0.34799999999999998},"max":{"1min":1.4450000000000001,"5min":1.4450000000000001,"15min":1.4450000000000001},"last":1.0640000000000001}]},{"osd":4,"last update":"Tue Mar 10 11:26:35 2026","interfaces":[{"interface":"back","average":{"1min":0.66500000000000004,"5min":0.66500000000000004,"15min":0.66500000000000004},"min":{"1min":0.20499999999999999,"5min":0.20499999999999999,"15min":0.20499999999999999},"max":{"1min":1.1000000000000001,"5min":1.1000000000000001,"15min":1.1000000000000001},"last":0.80300000000000005},{"interface":"front","average":{"1min":0.65300000000000002,"5min":0.65300000000000002,"15min":0.65300000000000002},"min":{"1min":0.23499999999999999,"5min":0.23499999999999999,"15min":0.23499999999999999},"max":{"1min":1.056,"5min":1.056,"15min":1.056},"last":0.71299999999999997}]},{"osd":6,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":0.46400000000000002}]},{"osd":7,"last update":"Thu Jan 1 00:00:00 1970","interfaces":[{"interface":"back","average":{"1min":0,"5min":0,"15min":0},"min":{"1min":0,"5min":0,"15min":0},"max":{"1min":0,"5min":0,"15min":0},"last":1.0860000000000001}]}]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-10T11:26:44.022 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-10T11:26:44.022 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-10T11:26:44.022 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-10T11:26:44.022 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph health --format=json 2026-03-10T11:26:44.245 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:43 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:26:43] "GET /metrics HTTP/1.1" 200 191095 "" "Prometheus/2.33.4" 2026-03-10T11:26:44.348 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:44 vm05 bash[39585]: level=info ts=2026-03-10T11:26:44.242Z caller=cluster.go:688 component=cluster msg="gossip settled; proceeding" elapsed=10.005401958s 2026-03-10T11:26:45.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:45 vm07 bash[17804]: cluster 2026-03-10T11:26:43.772983+0000 mgr.y (mgr.24310) 35 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:45.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:45 vm07 bash[17804]: audit 2026-03-10T11:26:43.957820+0000 mgr.y (mgr.24310) 36 : audit [DBG] from='client.24439 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:26:45.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:45 vm05 bash[22470]: cluster 2026-03-10T11:26:43.772983+0000 mgr.y (mgr.24310) 35 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:45.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:45 vm05 bash[22470]: audit 2026-03-10T11:26:43.957820+0000 mgr.y (mgr.24310) 36 : audit [DBG] from='client.24439 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:26:45.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:45 vm05 bash[17453]: cluster 2026-03-10T11:26:43.772983+0000 mgr.y (mgr.24310) 35 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:45.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:45 vm05 bash[17453]: audit 2026-03-10T11:26:43.957820+0000 mgr.y (mgr.24310) 36 : audit [DBG] from='client.24439 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-10T11:26:46.641 INFO:teuthology.orchestra.run.vm05.stderr:Inferring config /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/mon.c/config 2026-03-10T11:26:47.048 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-10T11:26:47.048 INFO:teuthology.orchestra.run.vm05.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-10T11:26:47.105 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-10T11:26:47.106 INFO:tasks.cephadm:Setup complete, yielding 2026-03-10T11:26:47.106 INFO:teuthology.run_tasks:Running task cephadm.shell... 
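A minimal sketch of the wait_until_healthy loop logged in this step, based on the `ceph health --format=json` command and its {"status":"HEALTH_OK","checks":{},"mutes":[]} reply shown above; the timeout and interval values are illustrative, not teuthology's:

```python
import json
import subprocess
import time

def wait_until_healthy(timeout: float = 300.0, interval: float = 5.0) -> None:
    # Poll `ceph health --format=json` (the exact command run above) until
    # the cluster answers with status HEALTH_OK; give up after `timeout`.
    deadline = time.time() + timeout
    while time.time() < deadline:
        health = json.loads(
            subprocess.check_output(["ceph", "health", "--format=json"]))
        if health["status"] == "HEALTH_OK":
            return
        time.sleep(interval)
    raise TimeoutError("cluster did not reach HEALTH_OK in time")
```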
2026-03-10T11:26:47.108 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm05.local
2026-03-10T11:26:47.108 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin realm create --rgw-realm=r --default'
2026-03-10T11:26:48.404 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:48 vm07 bash[17804]: cluster 2026-03-10T11:26:45.773275+0000 mgr.y (mgr.24310) 37 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:48.404 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:48 vm07 bash[17804]: audit 2026-03-10T11:26:47.044996+0000 mon.a (mon.0) 604 : audit [DBG] from='client.? 192.168.123.105:0/804026220' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T11:26:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:48 vm05 bash[22470]: cluster 2026-03-10T11:26:45.773275+0000 mgr.y (mgr.24310) 37 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:48.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:48 vm05 bash[22470]: audit 2026-03-10T11:26:47.044996+0000 mon.a (mon.0) 604 : audit [DBG] from='client.? 192.168.123.105:0/804026220' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T11:26:48.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:48 vm05 bash[17453]: cluster 2026-03-10T11:26:45.773275+0000 mgr.y (mgr.24310) 37 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:48.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:48 vm05 bash[17453]: audit 2026-03-10T11:26:47.044996+0000 mon.a (mon.0) 604 : audit [DBG] from='client.? 192.168.123.105:0/804026220' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch
2026-03-10T11:26:49.052 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:48 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:49.052 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:48 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:49.052 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:26:48 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:49.052 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:26:48 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:49.052 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:26:48 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:49.052 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:26:48 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:49.053 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:48 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:49.053 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:26:48 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:26:49.053 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:48 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
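The KillMode=none warning above comes from the unit file cephadm generated for this v17.2.0 container; systemd's suggested remedy is a managed kill mode. Purely to illustrate what the warning asks for, a hypothetical drop-in would look like the following; in practice these units are owned by cephadm, so the change belongs in cephadm's unit template rather than a hand edit:

```ini
# Hypothetical drop-in, for illustration only; path uses this run's fsid.
# /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d/override.conf
[Service]
KillMode=mixed
```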
2026-03-10T11:26:49.053 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 systemd[1]: Started Ceph grafana.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:26:49.300 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:49 vm07 bash[17804]: cluster 2026-03-10T11:26:47.773615+0000 mgr.y (mgr.24310) 38 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:49.300 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:49 vm07 bash[17804]: cluster 2026-03-10T11:26:48.509783+0000 mon.a (mon.0) 605 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in
2026-03-10T11:26:49.300 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:49 vm07 bash[17804]: audit 2026-03-10T11:26:48.510630+0000 mon.a (mon.0) 606 : audit [INF] from='client.?
192.168.123.105:0/3149942383' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T11:26:49.300 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:49 vm07 bash[17804]: audit 2026-03-10T11:26:49.075142+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:49.300 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:49 vm07 bash[17804]: audit 2026-03-10T11:26:49.078736+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:49.300 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:49 vm07 bash[17804]: audit 2026-03-10T11:26:49.079920+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:26:49.300 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:49 vm07 bash[17804]: audit 2026-03-10T11:26:49.080784+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="The state of unified alerting is still not defined. The decision will be made during as we run the database migrations" logger=settings 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=warn msg="falling back to legacy setting of 'min_interval_seconds'; please use the configuration option in the `unified_alerting` section if Grafana 8 alerts are enabled." 
logger=settings 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="App mode production" logger=settings 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=warn msg="SQLite database file has broader permissions than it should" logger=sqlstore path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r----- 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Starting DB migrations" logger=migrator 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create migration_log table" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 
vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create user table" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user.login" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user.email" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_user_login - v1" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_user_email - v1" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table user to user_v1 - v1" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create user table v2" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_user_login - v2" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_user_email - v2" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="copy data_source v1 to v2" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table user_v1" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column help_flags1 to user table" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update user table charset" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add last_seen_at column to user" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add missing user data" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add is_disabled column to user" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index user.login/user.email" 2026-03-10T11:26:49.301 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add is_service_account column to user" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create temp user table v1-7" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_email - v1-7" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_org_id - v1-7" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_code - v1-7" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_status - v1-7" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update temp_user table charset" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_email - v1" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_org_id - v1" 2026-03-10T11:26:49.301 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_code - v1" 2026-03-10T11:26:49.302 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_temp_user_status - v1" 2026-03-10T11:26:49.302 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table temp_user to temp_user_tmp_qwerty - v1" 2026-03-10T11:26:49.302 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create temp_user v2" 2026-03-10T11:26:49.302 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_email - v2" 2026-03-10T11:26:49.302 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_org_id - v2" 2026-03-10T11:26:49.302 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_code - v2" 2026-03-10T11:26:49.302 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_temp_user_status - v2" 2026-03-10T11:26:49.302 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="copy temp_user v1 to v2" 2026-03-10T11:26:49.302 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop temp_user_tmp_qwerty" 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:49 vm05 bash[17453]: cluster 2026-03-10T11:26:47.773615+0000 mgr.y (mgr.24310) 38 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:49 vm05 bash[17453]: cluster 2026-03-10T11:26:48.509783+0000 mon.a (mon.0) 605 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:49 vm05 bash[17453]: audit 2026-03-10T11:26:48.510630+0000 mon.a (mon.0) 606 : audit [INF] from='client.? 192.168.123.105:0/3149942383' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:49 vm05 bash[17453]: audit 2026-03-10T11:26:49.075142+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:49 vm05 bash[17453]: audit 2026-03-10T11:26:49.078736+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:49 vm05 bash[17453]: audit 2026-03-10T11:26:49.079920+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:49 vm05 bash[17453]: audit 2026-03-10T11:26:49.080784+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:49 vm05 bash[22470]: cluster 2026-03-10T11:26:47.773615+0000 mgr.y (mgr.24310) 38 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:49 vm05 bash[22470]: cluster 2026-03-10T11:26:48.509783+0000 mon.a (mon.0) 605 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:49 vm05 bash[22470]: audit 2026-03-10T11:26:48.510630+0000 mon.a (mon.0) 606 : audit [INF] from='client.? 
192.168.123.105:0/3149942383' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:49 vm05 bash[22470]: audit 2026-03-10T11:26:49.075142+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:49 vm05 bash[22470]: audit 2026-03-10T11:26:49.078736+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:49 vm05 bash[22470]: audit 2026-03-10T11:26:49.079920+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:26:49.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:49 vm05 bash[22470]: audit 2026-03-10T11:26:49.080784+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Set created for temp users that will otherwise prematurely expire" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create star table" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index star.user_id_dashboard_id" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create org table v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_org_name - v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create org_user table v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_org_user_org_id - v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_org_user_org_id_user_id - v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_org_user_user_id - v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update org table charset" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing 
migration" logger=migrator id="Update org_user table charset" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Migrate all Read Only Viewers to Viewers" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard table" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard.account_id" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_account_id_slug" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_tag table" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_tag.dasboard_id_term" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table dashboard to dashboard_v1 - v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard v2" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_org_id - v2" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_org_id_slug - v2" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="copy dashboard v1 to v2" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard.data to mediumtext v1" 2026-03-10T11:26:49.554 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column updated_by in dashboard - v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column 
created_by in dashboard - v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column gnetId in dashboard" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for gnetId in dashboard" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column plugin_id in dashboard" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for plugin_id in dashboard" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_id in dashboard_tag" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard table charset" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard_tag table charset" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column folder_id in dashboard" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column isFolder in dashboard" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column has_acl in dashboard" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column uid in dashboard" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid column values in dashboard" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index dashboard_org_id_uid" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index org_id_slug" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard title length" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index for dashboard_org_id_title_folder_id" 
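These migrator entries are Grafana replaying its schema migrations into the freshly created SQLite database at /var/lib/grafana/grafana.db, the same file the "broader permissions" warning above flagged. As a hedged sketch, assuming sqlite3 is available wherever that path is visible (inside the grafana container, or via the daemon's data directory on the host), one could verify the recorded migrations and tighten the mode to the expected -rw-r-----:

    # Illustrative sketch: reads the migration_log table created by the first
    # migration in the run above; chmod 640 matches the mode the warning expects.
    sudo sqlite3 /var/lib/grafana/grafana.db \
      'SELECT migration_id, success FROM migration_log ORDER BY id DESC LIMIT 5;'
    sudo chmod 640 /var/lib/grafana/grafana.db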
2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_provisioning" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_provisioning v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="copy dashboard_provisioning v1 to v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop dashboard_provisioning_tmp_qwerty" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add check_sum column" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_title" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="delete tags for deleted dashboards" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="delete stars for deleted dashboards" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for dashboard_is_folder" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create data_source table" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index data_source.account_id" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index data_source.account_id_name" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index 
IDX_data_source_account_id - v1" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_data_source_account_id_name - v1" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table data_source to data_source_v1 - v1" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create data_source table v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_data_source_org_id - v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_data_source_org_id_name - v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="copy data_source v1 to v2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table data_source_v1 #2" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column with_credentials" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add secure json data column" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update data_source table charset" 2026-03-10T11:26:49.555 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update initial version to 1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add read_only data column" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Migrate logging ds to loki ds" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update json_data with nulls" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add uid column" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid value" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 
11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index datasource_org_id_uid" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index datasource_org_id_is_default" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create api_key table" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.account_id" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.key" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index api_key.account_id_name" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_api_key_account_id - v1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_api_key_key - v1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_api_key_account_id_name - v1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table api_key to api_key_v1 - v1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create api_key table v2" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_api_key_org_id - v2" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_key - v2" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_api_key_org_id_name - v2" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="copy api_key v1 to v2" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table api_key_v1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing 
migration" logger=migrator id="Update api_key table charset" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add expires to api_key table" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add service account foreign key" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v4" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop table dashboard_snapshot_v4 #1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_snapshot table v5 #2" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_key - v5" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_snapshot to mediumtext v2" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update dashboard_snapshot table charset" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external_delete_url to dashboard_snapshots table" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add encrypted dashboard json column" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create quota table v1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: 
t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update quota table charset" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create plugin_setting table" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column plugin_version to plugin_settings" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update plugin_setting table charset" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create session table" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist table" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old table playlist_item table" 2026-03-10T11:26:49.556 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist table v2" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create playlist item table v2" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist table charset" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update playlist_item table charset" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v2" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop preferences table v3" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create preferences table v3" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update preferences table charset" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column team_id 
in preferences" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update team_id column values in preferences" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column week_start in preferences" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create alert table v1" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert org_id & id " 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert state" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert dashboard_id" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Create alert_rule_tag table v1" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Create alert_rule_tag table v2" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="copy alert_rule_tag v1 to v2" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop table alert_rule_tag_v1" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification table v1" 2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add 
column is_default"
2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column frequency"
2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column send_reminder"
2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column disable_resolve_message"
2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification org_id & name"
2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert table charset"
2026-03-10T11:26:49.557 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert_notification table charset"
2026-03-10T11:26:49.570 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:26:49.570 INFO:teuthology.orchestra.run.vm05.stdout:    "id": "918b585c-d9e4-4f15-bce7-205cf20f8cc7",
2026-03-10T11:26:49.570 INFO:teuthology.orchestra.run.vm05.stdout:    "name": "r",
2026-03-10T11:26:49.570 INFO:teuthology.orchestra.run.vm05.stdout:    "current_period": "827cf4ee-7286-49cf-bb58-1322a7a3103b",
2026-03-10T11:26:49.570 INFO:teuthology.orchestra.run.vm05.stdout:    "epoch": 1
2026-03-10T11:26:49.570 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:26:49.636 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin zonegroup create --rgw-zonegroup=default --master --default'
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create notification_journal table v1"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index notification_journal org_id & alert_id & notifier_id"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_notification_journal"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_notification_state table v1"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index alert_notification_state org_id & alert_id & notifier_id"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration"
logger=migrator id="Add for to alert table" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column uid in alert_notification" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update uid column values in alert_notification" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index alert_notification_org_id_uid" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index org_id_name" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column secure_settings in alert_notification" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert.settings to mediumtext" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add non-unique index alert_notification_state_alert_id" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add non-unique index alert_rule_tag_alert_id" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Drop old annotation table v4" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create annotation table v5" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 0 v3" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 1 v3" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 2 v3" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 3 v3" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index annotation 4 v3" 2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update annotation table charset" 
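Interleaved with the Grafana migrations, the test driver issues the next multisite step by running radosgw-admin inside a cephadm shell (the DEBUG:teuthology.orchestra.run.vm05 line above shows the zonegroup create step). As a hedged sketch, the same pattern can be reproduced by hand with the image and fsid taken from this run; the trailing command here is an arbitrary read-only example, not one the log shows:

    # Illustrative sketch: flags copied from the logged invocation above;
    # 'radosgw-admin period get' is just a harmless read-only example.
    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 \
      shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d \
      -- radosgw-admin period get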
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column region_id to annotation table"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Drop category_id index"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column tags to annotation table"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Create annotation_tag table v2"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add unique index annotation_tag.annotation_id_tag_id"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table annotation_tag to annotation_tag_v2 - v2"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Create annotation_tag table v3"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="copy annotation_tag v2 to v3"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop table annotation_tag_v2"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Update alert annotations and set TEXT to empty"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add created time to annotation table"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add updated time to annotation table"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for created in annotation table"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for updated in annotation table"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Convert existing annotations from seconds to milliseconds"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add epoch_end column"
2026-03-10T11:26:49.807 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for epoch_end"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Make epoch_end the same as epoch"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Move region to single row"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch from annotation table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for org_id_epoch_end_epoch on annotation table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Remove index org_id_epoch_epoch_end from annotation table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index for alert_id on annotation table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create test_data table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard_version table v1"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_version.dashboard_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Set dashboard version to 1 where 0"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="save existing dashboard data in dashboard_version table v1"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter dashboard_version.data to mediumtext v1"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create team table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index team.org_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_org_id_name"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create team member table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_member.org_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_member_org_id_team_id_user_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_member.team_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column email to team table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column external to team_member table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column permission to team_member table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create dashboard acl table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_dashboard_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_user_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index dashboard_acl_dashboard_id_team_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_user_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_team_id"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_acl_org_id_role"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index dashboard_permission"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="save default acl rules in dashboard_acl table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="delete acl rules for deleted dashboards and folders"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create tag table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index tag.key_value"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create login attempt table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index login_attempt.username"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index IDX_login_attempt_username - v1"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create login_attempt v2"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_login_attempt_username - v2"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="copy login_attempt v1 to v2"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop login_attempt_tmp_qwerty"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth table"
2026-03-10T11:26:49.808 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create index IDX_user_auth_auth_module_auth_id - v1"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter user_auth.auth_id to length 190"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth access token to user_auth"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth refresh token to user_auth"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth token type to user_auth"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add OAuth expiry to user_auth"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add index to user_id column in user_auth"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create server_lock table"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index server_lock.operation_uid"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create user auth token table"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.auth_token"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_auth_token.prev_auth_token"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_auth_token.user_id"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add revoked_at to the user auth token"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create cache_data table"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index cache_data.cache_key"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create short_url table v1"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index short_url.org_id-uid"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="delete alert_definition table"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="recreate alert_definition table"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition on org_id and title columns"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition on org_id and uid columns"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_definition table data column to mediumtext in mysql"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index in alert_definition on org_id and title columns"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop index in alert_definition on org_id and uid columns"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index in alert_definition on org_id and title columns"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index in alert_definition on org_id and uid columns"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column paused in alert_definition"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_definition table"
2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="delete alert_definition_version table"
msg="Executing migration" logger=migrator id="delete alert_definition_version table" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="recreate alert_definition_version table" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="drop alert_definition_version table" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_instance table" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column current_state_end to alert_instance" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="remove index def_org_id, current_state on alert_instance" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="rename def_org_id to rule_org_id in alert_instance" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="rename def_uid to rule_uid in alert_instance" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-10T11:26:49.809 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index rule_org_id, current_state on alert_instance" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_rule table" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id and title columns" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id and uid columns" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_rule table data column to mediumtext in mysql" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column for to alert_rule" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column annotations to alert_rule" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column labels to alert_rule" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="remove unique index from alert_rule on org_id, title columns" 2026-03-10T11:26:49.809 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-10T11:26:49.810 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add dashboard_uid column to alert_rule" 2026-03-10T11:26:49.810 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add panel_id column to alert_rule" 2026-03-10T11:26:49.810 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-10T11:26:49.810 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create alert_rule_version table" 2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:26:50.015 
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "id": "46f8ad0b-cb9b-4489-9d7f-fa5dcd792064",
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "name": "default",
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "api_name": "default",
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "is_master": "true",
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "endpoints": [],
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "hostnames": [],
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "hostnames_s3website": [],
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "master_zone": "",
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "zones": [],
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "placement_targets": [],
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "default_placement": "",
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "realm_id": "918b585c-d9e4-4f15-bce7-205cf20f8cc7",
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    "sync_policy": {
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:        "groups": []
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:    }
2026-03-10T11:26:50.015 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:26:50.058 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=z --master --default'
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alter alert_rule_version table data column to mediumtext in mysql"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column for to alert_rule_version"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column annotations to alert_rule_version"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column labels to alert_rule_version"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id=create_alert_configuration_table
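[editor's note] teuthology captures the radosgw-admin JSON above verbatim, and one detail is easy to trip over when scripting against it: is_master is serialized as the string "true", not a JSON boolean. A minimal parsing sketch (the zonegroup_json literal is abridged from the output above):

    import json

    # Abridged from the `radosgw-admin zonegroup create` output captured above.
    zonegroup_json = '''
    {
      "name": "default",
      "is_master": "true",
      "realm_id": "918b585c-d9e4-4f15-bce7-205cf20f8cc7"
    }
    '''

    zg = json.loads(zonegroup_json)
    # The value is the *string* "true", so compare against the string,
    # not Python's True.
    assert zg["is_master"] == "true"
    print(zg["realm_id"])  # 918b585c-d9e4-4f15-bce7-205cf20f8cc7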
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column default in alert_configuration"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column org_id in alert_configuration"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in alert_configuration table on org_id column"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id=create_ngalert_configuration_table
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index in ngalert_configuration on org_id column"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="clear migration entry \"remove unified alerting data\""
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="move dashboard alerts to unified alerting"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create library_element table v1"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index library_element org_id-folder_id-name-kind"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create library_element_connection table v1"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index library_element_connection element_id-kind-connection_id"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index library_element org_id_uid"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="clone move dashboard alerts to unified alerting"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create data_keys table"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create kv_store table v1"
2026-03-10T11:26:50.199 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index kv_store.org_id-namespace-key"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="update dashboard_uid and panel_id from existing annotations"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create permission table"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index permission.role_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role_id_action_scope"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create role table"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column display_name"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add column group_name"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index role.org_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role_org_id_name"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index role_org_id_uid"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create team role table"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_role.org_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index team_role_org_id_team_id_role_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index team_role.team_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create user role table"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_role.org_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index user_role_org_id_user_id_role_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index user_role.user_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create builtin role table"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.role_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.name"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Add column org_id to builtin_role table"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add index builtin_role.org_id"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index builtin_role_org_id_role_id_role"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="Remove unique index role_org_id_uid"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index role.uid"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="create seed assignment table"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Executing migration" logger=migrator id="add unique index builtin_role_role_name"
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="migrations completed" logger=migrator performed=381 skipped=0 duration=659.058511ms
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Created default organization" logger=sqlstore
2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Initialising plugins" logger=plugin.manager
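[editor's note] The summary line performed=381 skipped=0 reflects the standard migration-log pattern: every migration has a stable id, applied ids are recorded in a bookkeeping table, and on a later start only unrecorded ids run (skipped would then be non-zero). A generic sketch of that pattern, not Grafana's actual Go implementation:

    import sqlite3

    def run_migrations(db, migrations):
        """Apply each (id, sql) pair at most once, recording applied ids."""
        db.execute("CREATE TABLE IF NOT EXISTS migration_log (id TEXT PRIMARY KEY)")
        applied = {row[0] for row in db.execute("SELECT id FROM migration_log")}
        performed = skipped = 0
        for mig_id, sql in migrations:
            if mig_id in applied:
                skipped += 1
                continue
            db.execute(sql)
            db.execute("INSERT INTO migration_log (id) VALUES (?)", (mig_id,))
            performed += 1
        print(f"migrations completed performed={performed} skipped={skipped}")

    db = sqlite3.connect(":memory:")
    migs = [("create team table", "CREATE TABLE team (id INTEGER PRIMARY KEY)")]
    run_migrations(db, migs)  # migrations completed performed=1 skipped=0
    run_migrations(db, migs)  # migrations completed performed=0 skipped=1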
msg="Plugin registered" logger=plugin.manager pluginId=input 2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=vonage-status-panel 2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=grafana-piechart-panel 2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:49 vm07 bash[33470]: t=2026-03-10T11:26:49+0000 lvl=info msg="Live Push Gateway initialization" logger=live.push_http 2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:50 vm07 bash[33470]: t=2026-03-10T11:26:50+0000 lvl=warn msg="[Deprecated] the datasource provisioning config is outdated. please upgrade" logger=provisioning.datasources filename=/etc/grafana/provisioning/datasources/ceph-dashboard.yml 2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:50 vm07 bash[33470]: t=2026-03-10T11:26:50+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-10T11:26:50.200 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:50 vm07 bash[33470]: t=2026-03-10T11:26:50+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T11:26:50.201 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:50 vm07 bash[33470]: t=2026-03-10T11:26:50+0000 lvl=info msg="warming cache for startup" logger=ngalert 2026-03-10T11:26:50.201 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:26:50 vm07 bash[33470]: t=2026-03-10T11:26:50+0000 lvl=info msg="starting MultiOrg Alertmanager" logger=ngalert.multiorg.alertmanager 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "id": "877d5541-229e-45d2-aff6-b5c4be9b16ef", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "name": "z", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "domain_root": "z.rgw.meta:root", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "control_pool": "z.rgw.control", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "gc_pool": "z.rgw.log:gc", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "lc_pool": "z.rgw.log:lc", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "log_pool": "z.rgw.log", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "intent_log_pool": "z.rgw.log:intent", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "usage_log_pool": "z.rgw.log:usage", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "roles_pool": "z.rgw.meta:roles", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "reshard_pool": "z.rgw.log:reshard", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "user_keys_pool": "z.rgw.meta:users.keys", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "user_email_pool": "z.rgw.meta:users.email", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "user_swift_pool": "z.rgw.meta:users.swift", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "user_uid_pool": "z.rgw.meta:users.uid", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: 
"otp_pool": "z.rgw.otp", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "system_key": { 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "access_key": "", 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: "secret_key": "" 2026-03-10T11:26:50.432 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "placement_pools": [ 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: { 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "key": "default-placement", 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "val": { 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "index_pool": "z.rgw.buckets.index", 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "storage_classes": { 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "STANDARD": { 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "data_pool": "z.rgw.buckets.data" 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "data_extra_pool": "z.rgw.buckets.non-ec", 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "index_type": 0 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: ], 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "realm_id": "918b585c-d9e4-4f15-bce7-205cf20f8cc7", 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout: "notif_pool": "z.rgw.log:notif" 2026-03-10T11:26:50.433 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:26:50.490 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'radosgw-admin period update --rgw-realm=r --commit' 2026-03-10T11:26:50.701 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:50 vm05 bash[22470]: audit 2026-03-10T11:26:49.503548+0000 mon.a (mon.0) 608 : audit [INF] from='client.? 192.168.123.105:0/3149942383' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished 2026-03-10T11:26:50.701 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:50 vm05 bash[22470]: cluster 2026-03-10T11:26:49.503627+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-10T11:26:50.701 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:50 vm05 bash[22470]: cluster 2026-03-10T11:26:49.774902+0000 mgr.y (mgr.24310) 39 : cluster [DBG] pgmap v22: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:26:50.702 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:50 vm05 bash[17453]: audit 2026-03-10T11:26:49.503548+0000 mon.a (mon.0) 608 : audit [INF] from='client.? 
2026-03-10T11:26:50.702 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:50 vm05 bash[17453]: cluster 2026-03-10T11:26:49.503627+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-10T11:26:50.702 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:50 vm05 bash[17453]: cluster 2026-03-10T11:26:49.774902+0000 mgr.y (mgr.24310) 39 : cluster [DBG] pgmap v22: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:50.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:50 vm07 bash[17804]: audit 2026-03-10T11:26:49.503548+0000 mon.a (mon.0) 608 : audit [INF] from='client.? 192.168.123.105:0/3149942383' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
2026-03-10T11:26:50.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:50 vm07 bash[17804]: cluster 2026-03-10T11:26:49.503627+0000 mon.a (mon.0) 609 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in
2026-03-10T11:26:50.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:50 vm07 bash[17804]: cluster 2026-03-10T11:26:49.774902+0000 mgr.y (mgr.24310) 39 : cluster [DBG] pgmap v22: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:51.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:51 vm05 bash[22470]: cluster 2026-03-10T11:26:50.517688+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-10T11:26:51.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:51 vm05 bash[22470]: audit 2026-03-10T11:26:50.909023+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:51.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:51 vm05 bash[17453]: cluster 2026-03-10T11:26:50.517688+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-10T11:26:51.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:51 vm05 bash[17453]: audit 2026-03-10T11:26:50.909023+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:51.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:51 vm07 bash[17804]: cluster 2026-03-10T11:26:50.517688+0000 mon.a (mon.0) 610 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in
2026-03-10T11:26:51.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:51 vm07 bash[17804]: audit 2026-03-10T11:26:50.909023+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:51.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:26:51 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:26:51] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:52 vm05 bash[22470]: cluster 2026-03-10T11:26:51.775237+0000 mgr.y (mgr.24310) 40 : cluster [DBG] pgmap v24: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:52 vm05 bash[22470]: cluster 2026-03-10T11:26:51.917237+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:52 vm05 bash[22470]: audit 2026-03-10T11:26:51.930461+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.105:0/1294687597' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:52 vm05 bash[22470]: audit 2026-03-10T11:26:51.931811+0000 mon.a (mon.0) 613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:52 vm05 bash[22470]: audit 2026-03-10T11:26:52.330362+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:52 vm05 bash[17453]: cluster 2026-03-10T11:26:51.775237+0000 mgr.y (mgr.24310) 40 : cluster [DBG] pgmap v24: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:52 vm05 bash[17453]: cluster 2026-03-10T11:26:51.917237+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:52 vm05 bash[17453]: audit 2026-03-10T11:26:51.930461+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.105:0/1294687597' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:52 vm05 bash[17453]: audit 2026-03-10T11:26:51.931811+0000 mon.a (mon.0) 613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch
2026-03-10T11:26:53.203 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:52 vm05 bash[17453]: audit 2026-03-10T11:26:52.330362+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:53.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:52 vm07 bash[17804]: cluster 2026-03-10T11:26:51.775237+0000 mgr.y (mgr.24310) 40 : cluster [DBG] pgmap v24: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 48 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:26:53.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:52 vm07 bash[17804]: cluster 2026-03-10T11:26:51.917237+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-10T11:26:53.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:52 vm07 bash[17804]: audit 2026-03-10T11:26:51.930461+0000 mon.c (mon.1) 23 : audit [INF] from='client.? 192.168.123.105:0/1294687597' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch
2026-03-10T11:26:53.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:52 vm07 bash[17804]: audit 2026-03-10T11:26:51.931811+0000 mon.a (mon.0) 613 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]: dispatch
2026-03-10T11:26:53.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:52 vm07 bash[17804]: audit 2026-03-10T11:26:52.330362+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:53.791 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 systemd[1]: Stopping Ceph alertmanager.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:26:53.791 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42727]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-alertmanager.a
2026-03-10T11:26:53.791 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[39585]: level=info ts=2026-03-10T11:26:53.587Z caller=main.go:557 msg="Received SIGTERM, exiting gracefully..."
2026-03-10T11:26:53.791 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42734]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-alertmanager-a
2026-03-10T11:26:53.791 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42768]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-alertmanager.a
2026-03-10T11:26:53.791 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@alertmanager.a.service: Deactivated successfully.
2026-03-10T11:26:53.791 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 systemd[1]: Stopped Ceph alertmanager.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:26:53.791 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 systemd[1]: Started Ceph alertmanager.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: audit 2026-03-10T11:26:52.926818+0000 mon.a (mon.0) 615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: cluster 2026-03-10T11:26:52.927238+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: audit 2026-03-10T11:26:53.178988+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: audit 2026-03-10T11:26:53.188240+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: cephadm 2026-03-10T11:26:53.191999+0000 mgr.y (mgr.24310) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: cephadm 2026-03-10T11:26:53.193942+0000 mgr.y (mgr.24310) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm05
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: audit 2026-03-10T11:26:53.715816+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: cluster 2026-03-10T11:26:53.941017+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: audit 2026-03-10T11:26:53.943050+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.105:0/1294687597' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch
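[editor's note] The restart above is cephadm's reconfigure path: mgr.y notices the daemon's dependencies changed, rewrites its config, and bounces its systemd unit. The unit names follow the ceph-<fsid>@<daemon_type>.<daemon_id>.service scheme visible in these lines. A sketch that composes the unit name and queries its state; fsid and daemon name are taken from the log, and running this usefully requires a host that actually has the unit:

    import subprocess

    def ceph_unit(fsid: str, daemon: str) -> str:
        """systemd unit name for a cephadm-managed daemon: ceph-<fsid>@<daemon>."""
        return f"ceph-{fsid}@{daemon}.service"

    unit = ceph_unit("72041074-1c73-11f1-8607-4fca9a5e0a4d", "alertmanager.a")
    print(unit)  # ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@alertmanager.a.service

    # `systemctl is-active` exits non-zero when the unit is inactive,
    # hence check=False.
    state = subprocess.run(["systemctl", "is-active", unit],
                           capture_output=True, text=True, check=False)
    print(state.stdout.strip())  # e.g. "active" after the restart above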
2026-03-10T11:26:53.987 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:53 vm07 bash[17804]: audit 2026-03-10T11:26:53.943321+0000 mon.a (mon.0) 621 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: audit 2026-03-10T11:26:52.926818+0000 mon.a (mon.0) 615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: cluster 2026-03-10T11:26:52.927238+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: audit 2026-03-10T11:26:53.178988+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: audit 2026-03-10T11:26:53.188240+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: cephadm 2026-03-10T11:26:53.191999+0000 mgr.y (mgr.24310) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: cephadm 2026-03-10T11:26:53.193942+0000 mgr.y (mgr.24310) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm05
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: audit 2026-03-10T11:26:53.715816+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: cluster 2026-03-10T11:26:53.941017+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: audit 2026-03-10T11:26:53.943050+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.105:0/1294687597' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:53 vm05 bash[22470]: audit 2026-03-10T11:26:53.943321+0000 mon.a (mon.0) 621 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: audit 2026-03-10T11:26:52.926818+0000 mon.a (mon.0) 615 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.log","app": "rgw"}]': finished
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: cluster 2026-03-10T11:26:52.927238+0000 mon.a (mon.0) 616 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: audit 2026-03-10T11:26:53.178988+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: audit 2026-03-10T11:26:53.188240+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: cephadm 2026-03-10T11:26:53.191999+0000 mgr.y (mgr.24310) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: cephadm 2026-03-10T11:26:53.193942+0000 mgr.y (mgr.24310) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm05
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: audit 2026-03-10T11:26:53.715816+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: cluster 2026-03-10T11:26:53.941017+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: audit 2026-03-10T11:26:53.943050+0000 mon.c (mon.1) 24 : audit [INF] from='client.? 192.168.123.105:0/1294687597' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch
2026-03-10T11:26:54.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[17453]: audit 2026-03-10T11:26:53.943321+0000 mon.a (mon.0) 621 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]: dispatch
2026-03-10T11:26:54.098 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42794]: level=info ts=2026-03-10T11:26:53.849Z caller=main.go:225 msg="Starting Alertmanager" version="(version=0.23.0, branch=HEAD, revision=61046b17771a57cfd4c4a51be370ab930a4d7d54)"
2026-03-10T11:26:54.098 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42794]: level=info ts=2026-03-10T11:26:53.849Z caller=main.go:226 build_context="(go=go1.16.7, user=root@e21a959be8d2, date=20210825-10:48:55)"
2026-03-10T11:26:54.098 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42794]: level=info ts=2026-03-10T11:26:53.851Z caller=cluster.go:184 component=cluster msg="setting advertise address explicitly" addr=192.168.123.105 port=9094
2026-03-10T11:26:54.098 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42794]: level=info ts=2026-03-10T11:26:53.852Z caller=cluster.go:671 component=cluster msg="Waiting for gossip to settle..." interval=2s
interval=2s 2026-03-10T11:26:54.098 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42794]: level=info ts=2026-03-10T11:26:53.873Z caller=coordinator.go:113 component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T11:26:54.098 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42794]: level=info ts=2026-03-10T11:26:53.873Z caller=coordinator.go:126 component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-10T11:26:54.099 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42794]: level=info ts=2026-03-10T11:26:53.875Z caller=main.go:518 msg=Listening address=:9093 2026-03-10T11:26:54.099 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:53 vm05 bash[42794]: level=info ts=2026-03-10T11:26:53.875Z caller=tls_config.go:191 msg="TLS is disabled." http2=false 2026-03-10T11:26:54.099 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:26:53 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:26:53] "GET /metrics HTTP/1.1" 200 191072 "" "Prometheus/2.33.4" 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 systemd[1]: Stopping Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d... 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33972]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-prometheus.a 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.177Z caller=main.go:775 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.177Z caller=main.go:798 level=info msg="Stopping scrape discovery manager..." 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.177Z caller=main.go:812 level=info msg="Stopping notify discovery manager..." 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.177Z caller=main.go:834 level=info msg="Stopping scrape manager..." 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.177Z caller=main.go:794 level=info msg="Scrape discovery manager stopped" 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.177Z caller=main.go:808 level=info msg="Notify discovery manager stopped" 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.177Z caller=manager.go:945 level=info component="rule manager" msg="Stopping rule manager..." 
2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.177Z caller=manager.go:955 level=info component="rule manager" msg="Rule manager stopped" 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.177Z caller=main.go:828 level=info msg="Scrape manager stopped" 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.178Z caller=notifier.go:600 level=info component=notifier msg="Stopping notification manager..." 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.178Z caller=main.go:1054 level=info msg="Notifier manager stopped" 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33148]: ts=2026-03-10T11:26:54.178Z caller=main.go:1066 level=info msg="See you next time!" 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[33979]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-prometheus-a 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34013]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-prometheus.a 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@prometheus.a.service: Deactivated successfully. 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 systemd[1]: Stopped Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:26:54.239 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 systemd[1]: Started Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 
2026-03-10T11:26:54.698 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.385Z caller=main.go:475 level=info msg="No time or size retention was set so using the default time retention" duration=15d 2026-03-10T11:26:54.698 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.385Z caller=main.go:512 level=info msg="Starting Prometheus" version="(version=2.33.4, branch=HEAD, revision=83032011a5d3e6102624fe58241a374a7201fee8)" 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.385Z caller=main.go:517 level=info build_context="(go=go1.17.7, user=root@d13bf69e7be8, date=20220222-16:51:28)" 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.385Z caller=main.go:518 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm07 (none))" 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.386Z caller=main.go:519 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.386Z caller=main.go:520 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.387Z caller=web.go:570 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.388Z caller=main.go:923 level=info msg="Starting TSDB ..." 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.388Z caller=tls_config.go:195 level=info component=web msg="TLS is disabled." http2=false 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.392Z caller=head.go:493 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.392Z caller=head.go:527 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.412µs 2026-03-10T11:26:54.699 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:54 vm07 bash[34037]: ts=2026-03-10T11:26:54.392Z caller=head.go:533 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: cephadm 2026-03-10T11:26:53.719446+0000 mgr.y (mgr.24310) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 
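Both monitoring daemons were bounced because of the "Reconfiguring ... (dependencies changed)" events above: when the endpoints a daemon scrapes or notifies change, cephadm regenerates its configuration and restarts it through systemd. The same cycle can be requested by hand; a hedged sketch using the daemon names from this run:

    ceph orch daemon reconfig prometheus.a      # rewrite config files and restart the daemon
    ceph orch daemon redeploy alertmanager.a    # heavier variant: also recreate the container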
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: cephadm 2026-03-10T11:26:53.723105+0000 mgr.y (mgr.24310) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: cluster 2026-03-10T11:26:53.775608+0000 mgr.y (mgr.24310) 45 : cluster [DBG] pgmap v27: 65 pgs: 32 unknown, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.0 KiB/s rd, 3.7 KiB/s wr, 11 op/s
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.246421+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.251309+0000 mgr.y (mgr.24310) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.252506+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.252595+0000 mgr.y (mgr.24310) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.105:9093"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.253929+0000 mon.b (mon.2) 61 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.105:9093"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.258548+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.264748+0000 mgr.y (mgr.24310) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.265887+0000 mgr.y (mgr.24310) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.107:3000"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.265973+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.267225+0000 mon.b (mon.2) 63 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.107:3000"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.273323+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.281264+0000 mgr.y (mgr.24310) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.282456+0000 mon.b (mon.2) 64 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.282901+0000 mgr.y (mgr.24310) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.107:9095"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.284205+0000 mon.b (mon.2) 65 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.107:9095"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.288124+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.295259+0000 mon.b (mon.2) 66 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.297922+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.299436+0000 mon.b (mon.2) 68 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: audit 2026-03-10T11:26:54.932966+0000 mon.a (mon.0) 626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:55 vm05 bash[22470]: cluster 2026-03-10T11:26:54.933139+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-10T11:26:55.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: cephadm 2026-03-10T11:26:53.719446+0000 mgr.y (mgr.24310) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: cephadm 2026-03-10T11:26:53.723105+0000 mgr.y (mgr.24310) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: cluster 2026-03-10T11:26:53.775608+0000 mgr.y (mgr.24310) 45 : cluster [DBG] pgmap v27: 65 pgs: 32 unknown, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.0 KiB/s rd, 3.7 KiB/s wr, 11 op/s
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.246421+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.251309+0000 mgr.y (mgr.24310) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.252506+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.252595+0000 mgr.y (mgr.24310) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.105:9093"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.253929+0000 mon.b (mon.2) 61 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.105:9093"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.258548+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.264748+0000 mgr.y (mgr.24310) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.265887+0000 mgr.y (mgr.24310) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.107:3000"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.265973+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.267225+0000 mon.b (mon.2) 63 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.107:3000"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.273323+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.281264+0000 mgr.y (mgr.24310) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.282456+0000 mon.b (mon.2) 64 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.282901+0000 mgr.y (mgr.24310) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.107:9095"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.284205+0000 mon.b (mon.2) 65 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.107:9095"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.288124+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.295259+0000 mon.b (mon.2) 66 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.297922+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.299436+0000 mon.b (mon.2) 68 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: audit 2026-03-10T11:26:54.932966+0000 mon.a (mon.0) 626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished
2026-03-10T11:26:55.599 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[17453]: cluster 2026-03-10T11:26:54.933139+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: cephadm 2026-03-10T11:26:53.719446+0000 mgr.y (mgr.24310) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: cephadm 2026-03-10T11:26:53.723105+0000 mgr.y (mgr.24310) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: cluster 2026-03-10T11:26:53.775608+0000 mgr.y (mgr.24310) 45 : cluster [DBG] pgmap v27: 65 pgs: 32 unknown, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.0 KiB/s rd, 3.7 KiB/s wr, 11 op/s
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.246421+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.251309+0000 mgr.y (mgr.24310) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.252506+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.252595+0000 mgr.y (mgr.24310) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.105:9093"}]: dispatch
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.253929+0000 mon.b (mon.2) 61 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.123.105:9093"}]: dispatch
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.258548+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.264748+0000 mgr.y (mgr.24310) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.265887+0000 mgr.y (mgr.24310) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.107:3000"}]: dispatch
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.265973+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:26:55.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.267225+0000 mon.b (mon.2) 63 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.123.107:3000"}]: dispatch
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.273323+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.281264+0000 mgr.y (mgr.24310) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.282456+0000 mon.b (mon.2) 64 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.282901+0000 mgr.y (mgr.24310) 51 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.107:9095"}]: dispatch
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.284205+0000 mon.b (mon.2) 65 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.123.107:9095"}]: dispatch
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.288124+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.295259+0000 mon.b (mon.2) 66 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.297922+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.299436+0000 mon.b (mon.2) 68 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: audit 2026-03-10T11:26:54.932966+0000 mon.a (mon.0) 626 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.control","app": "rgw"}]': finished
2026-03-10T11:26:55.699 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:55 vm07 bash[17804]: cluster 2026-03-10T11:26:54.933139+0000 mon.a (mon.0) 627 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in
2026-03-10T11:26:56.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:55 vm07 bash[34037]: ts=2026-03-10T11:26:55.766Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1
2026-03-10T11:26:56.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:55 vm07 bash[34037]: ts=2026-03-10T11:26:55.767Z caller=head.go:604 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1
2026-03-10T11:26:56.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:55 vm07 bash[34037]: ts=2026-03-10T11:26:55.767Z caller=head.go:610 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=23.774µs wal_replay_duration=1.374833361s total_replay_duration=1.374865841s
2026-03-10T11:26:56.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:55 vm07 bash[34037]: ts=2026-03-10T11:26:55.768Z caller=main.go:944 level=info fs_type=EXT4_SUPER_MAGIC
2026-03-10T11:26:56.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:55 vm07 bash[34037]: ts=2026-03-10T11:26:55.768Z caller=main.go:947 level=info msg="TSDB started"
2026-03-10T11:26:56.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:55 vm07 bash[34037]: ts=2026-03-10T11:26:55.768Z caller=main.go:1128 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-10T11:26:56.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:55 vm07 bash[34037]: ts=2026-03-10T11:26:55.785Z caller=main.go:1165 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=16.576891ms db_storage=541ns remote_storage=1.443µs web_handler=581ns query_engine=701ns scrape=1.389569ms scrape_sd=39.955µs notify=40.316µs notify_sd=9.398µs rules=14.886829ms
2026-03-10T11:26:56.198 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:26:55 vm07 bash[34037]: ts=2026-03-10T11:26:55.785Z caller=main.go:896 level=info msg="Server is ready to receive web requests."
2026-03-10T11:26:56.348 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:26:55 vm05 bash[42794]: level=info ts=2026-03-10T11:26:55.852Z caller=cluster.go:696 component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000258282s
2026-03-10T11:26:57.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:57 vm05 bash[17453]: cluster 2026-03-10T11:26:55.776054+0000 mgr.y (mgr.24310) 52 : cluster [DBG] pgmap v30: 97 pgs: 64 unknown, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s
2026-03-10T11:26:57.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:57 vm05 bash[17453]: cluster 2026-03-10T11:26:55.947085+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-10T11:26:57.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:57 vm05 bash[17453]: audit 2026-03-10T11:26:55.950817+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T11:26:57.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: cluster 2026-03-10T11:26:55.776054+0000 mgr.y (mgr.24310) 52 : cluster [DBG] pgmap v30: 97 pgs: 64 unknown, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s
2026-03-10T11:26:57.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: cluster 2026-03-10T11:26:55.947085+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-10T11:26:57.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: audit 2026-03-10T11:26:55.950817+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T11:26:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:56 vm07 bash[17804]: cluster 2026-03-10T11:26:55.776054+0000 mgr.y (mgr.24310) 52 : cluster [DBG] pgmap v30: 97 pgs: 64 unknown, 33 active+clean; 451 KiB data, 50 MiB used, 160 GiB / 160 GiB avail; 7.5 KiB/s rd, 4.0 KiB/s wr, 12 op/s
2026-03-10T11:26:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:56 vm07 bash[17804]: cluster 2026-03-10T11:26:55.947085+0000 mon.a (mon.0) 628 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in
2026-03-10T11:26:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:56 vm07 bash[17804]: audit 2026-03-10T11:26:55.950817+0000 mon.a (mon.0) 629 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]: dispatch
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: audit 2026-03-10T11:26:56.964926+0000 mon.a (mon.0) 630 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: cluster 2026-03-10T11:26:56.965697+0000 mon.a (mon.0) 631 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: audit 2026-03-10T11:26:57.116285+0000 mon.a (mon.0) 632 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: audit 2026-03-10T11:26:57.406155+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: audit 2026-03-10T11:26:57.477899+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: audit 2026-03-10T11:26:57.484741+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: audit 2026-03-10T11:26:57.954713+0000 mon.a (mon.0) 636 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: cluster 2026-03-10T11:26:57.954788+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:57 vm05 bash[22470]: audit 2026-03-10T11:26:57.955559+0000 mon.a (mon.0) 638 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:58 vm05 bash[17453]: audit 2026-03-10T11:26:56.964926+0000 mon.a (mon.0) 630 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:58 vm05 bash[17453]: cluster 2026-03-10T11:26:56.965697+0000 mon.a (mon.0) 631 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:58 vm05 bash[17453]: audit 2026-03-10T11:26:57.116285+0000 mon.a (mon.0) 632 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:58 vm05 bash[17453]: audit 2026-03-10T11:26:57.406155+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:58 vm05 bash[17453]: audit 2026-03-10T11:26:57.477899+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:58 vm05 bash[17453]: audit 2026-03-10T11:26:57.484741+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:58 vm05 bash[17453]: audit 2026-03-10T11:26:57.954713+0000 mon.a (mon.0) 636 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:58 vm05 bash[17453]: cluster 2026-03-10T11:26:57.954788+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
2026-03-10T11:26:58.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:58 vm05 bash[17453]: audit 2026-03-10T11:26:57.955559+0000 mon.a (mon.0) 638 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch
2026-03-10T11:26:58.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:58 vm07 bash[17804]: audit 2026-03-10T11:26:56.964926+0000 mon.a (mon.0) 630 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "z.rgw.meta","app": "rgw"}]': finished
2026-03-10T11:26:58.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:58 vm07 bash[17804]: cluster 2026-03-10T11:26:56.965697+0000 mon.a (mon.0) 631 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in
2026-03-10T11:26:58.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:58 vm07 bash[17804]: audit 2026-03-10T11:26:57.116285+0000 mon.a (mon.0) 632 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
2026-03-10T11:26:58.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:58 vm07 bash[17804]: audit 2026-03-10T11:26:57.406155+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:58.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:58 vm07 bash[17804]: audit 2026-03-10T11:26:57.477899+0000 mon.a (mon.0) 634 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:58.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:58 vm07 bash[17804]: audit 2026-03-10T11:26:57.484741+0000 mon.a (mon.0) 635 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:26:58.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:58 vm07 bash[17804]: audit 2026-03-10T11:26:57.954713+0000 mon.a (mon.0) 636 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-10T11:26:58.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:58 vm07 bash[17804]: cluster 2026-03-10T11:26:57.954788+0000 mon.a (mon.0) 637 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in
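The pool updates in this stretch are RGW bootstrapping its zone pools: each z.rgw.* pool gets the rgw application tag, and the metadata pool is additionally tuned with pg_autoscale_bias=4 and pg_num_min=8 so the autoscaler keeps this small but hot pool from shrinking to too few PGs. A sketch for verifying the autoscaler's resulting view (not part of this run):

    ceph osd pool autoscale-status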
2026-03-10T11:26:58.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:58 vm07 bash[17804]: audit 2026-03-10T11:26:57.955559+0000 mon.a (mon.0) 638 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]: dispatch
2026-03-10T11:26:59.213 INFO:teuthology.orchestra.run.vm05.stdout:
{
    "id": "e624ba72-867d-4a02-b5d7-6b7c7f1bd9b5",
    "epoch": 1,
    "predecessor_uuid": "827cf4ee-7286-49cf-bb58-1322a7a3103b",
    "sync_status": [],
    "period_map": {
        "id": "e624ba72-867d-4a02-b5d7-6b7c7f1bd9b5",
        "zonegroups": [
            {
                "id": "46f8ad0b-cb9b-4489-9d7f-fa5dcd792064",
                "name": "default",
                "api_name": "default",
                "is_master": "true",
                "endpoints": [],
                "hostnames": [],
                "hostnames_s3website": [],
                "master_zone": "877d5541-229e-45d2-aff6-b5c4be9b16ef",
                "zones": [
                    {
                        "id": "877d5541-229e-45d2-aff6-b5c4be9b16ef",
                        "name": "z",
                        "endpoints": [],
                        "log_meta": "false",
                        "log_data": "false",
                        "bucket_index_max_shards": 11,
                        "read_only": "false",
                        "tier_type": "",
                        "sync_from_all": "true",
                        "sync_from": [],
                        "redirect_zone": ""
                    }
                ],
                "placement_targets": [
                    {
                        "name": "default-placement",
                        "tags": [],
                        "storage_classes": [
                            "STANDARD"
                        ]
                    }
                ],
                "default_placement": "default-placement",
                "realm_id": "918b585c-d9e4-4f15-bce7-205cf20f8cc7",
                "sync_policy": {
                    "groups": []
                }
            }
        ],
        "short_zone_ids": [
            {
                "key": "877d5541-229e-45d2-aff6-b5c4be9b16ef",
                "val": 3839893394
            }
        ]
    },
    "master_zonegroup": "46f8ad0b-cb9b-4489-9d7f-fa5dcd792064",
    "master_zone": "877d5541-229e-45d2-aff6-b5c4be9b16ef",
    "period_config": {
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        },
        "user_ratelimit": {
            "max_read_ops": 0,
            "max_write_ops": 0,
            "max_read_bytes": 0,
            "max_write_bytes": 0,
            "enabled": false
        },
        "bucket_ratelimit": {
            "max_read_ops": 0,
            "max_write_ops": 0,
            "max_read_bytes": 0,
            "max_write_bytes": 0,
            "enabled": false
        },
        "anonymous_ratelimit": {
            "max_read_ops": 0,
            "max_write_ops": 0,
            "max_read_bytes": 0,
            "max_write_bytes": 0,
            "enabled": false
        }
    },
    "realm_id": "918b585c-d9e4-4f15-bce7-205cf20f8cc7",
    "realm_name": "r",
    "realm_epoch": 2
}
2026-03-10T11:26:59.285 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:59 vm05 bash[17453]: cluster 2026-03-10T11:26:57.776424+0000 mgr.y (mgr.24310) 53 : cluster [DBG] pgmap v33: 129 pgs: 32 creating+peering, 97 active+clean; 451 KiB data, 52 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
2026-03-10T11:26:59.285 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:59 vm05 bash[17453]: audit 2026-03-10T11:26:58.960381+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished
2026-03-10T11:26:59.285 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:26:59 vm05 bash[17453]: cluster 2026-03-10T11:26:58.960428+0000 mon.a (mon.0) 640 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in
2026-03-10T11:26:59.285 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:59 vm05 bash[22470]: cluster 2026-03-10T11:26:57.776424+0000 mgr.y (mgr.24310) 53 : cluster [DBG] pgmap v33: 129 pgs: 32 creating+peering, 97 active+clean; 451 KiB data, 52 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
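The JSON above is the realm's current period as radosgw-admin prints it: epoch 1 of period e624ba72-867d-4a02-b5d7-6b7c7f1bd9b5 at realm_epoch 2, with the bootstrap period 827cf4ee-7286-49cf-bb58-1322a7a3103b as predecessor, zonegroup default marked master, and zone z as its master zone. A committed period can be re-displayed later with, for example:

    radosgw-admin period get --rgw-realm=r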
2026-03-10T11:26:59.285 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:59 vm05 bash[22470]: audit 2026-03-10T11:26:58.960381+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished
2026-03-10T11:26:59.285 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:26:59 vm05 bash[22470]: cluster 2026-03-10T11:26:58.960428+0000 mon.a (mon.0) 640 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in
2026-03-10T11:26:59.286 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply rgw foo --realm r --zone z --placement=2 --port=8000'
2026-03-10T11:26:59.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:59 vm07 bash[17804]: cluster 2026-03-10T11:26:57.776424+0000 mgr.y (mgr.24310) 53 : cluster [DBG] pgmap v33: 129 pgs: 32 creating+peering, 97 active+clean; 451 KiB data, 52 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
2026-03-10T11:26:59.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:59 vm07 bash[17804]: audit 2026-03-10T11:26:58.960381+0000 mon.a (mon.0) 639 : audit [INF] from='client.? 192.168.123.105:0/4074079121' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "z.rgw.meta", "var": "pg_num_min", "val": "8"}]': finished
2026-03-10T11:26:59.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:26:59 vm07 bash[17804]: cluster 2026-03-10T11:26:58.960428+0000 mon.a (mon.0) 640 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in
2026-03-10T11:26:59.726 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled rgw.foo update...
2026-03-10T11:26:59.793 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph osd pool create foo'
2026-03-10T11:27:00.761 INFO:teuthology.orchestra.run.vm05.stderr:pool 'foo' created
2026-03-10T11:27:00.822 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'rbd pool init foo'
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:00 vm05 bash[22470]: audit 2026-03-10T11:26:59.714753+0000 mgr.y (mgr.24310) 54 : audit [DBG] from='client.14595 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:00 vm05 bash[22470]: cephadm 2026-03-10T11:26:59.716182+0000 mgr.y (mgr.24310) 55 : cephadm [INF] Saving service rgw.foo spec with placement count:2
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:00 vm05 bash[22470]: audit 2026-03-10T11:26:59.721033+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:00 vm05 bash[22470]: audit 2026-03-10T11:26:59.757031+0000 mon.b (mon.2) 69 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:00 vm05 bash[22470]: audit 2026-03-10T11:26:59.758654+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:00 vm05 bash[22470]: audit 2026-03-10T11:26:59.760829+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:00 vm05 bash[22470]: cluster 2026-03-10T11:26:59.776815+0000 mgr.y (mgr.24310) 56 : cluster [DBG] pgmap v36: 129 pgs: 32 creating+peering, 97 active+clean; 451 KiB data, 52 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:00 vm05 bash[22470]: audit 2026-03-10T11:27:00.262583+0000 mon.a (mon.0) 642 : audit [INF] from='client.? 192.168.123.105:0/3581587346' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:00 vm05 bash[17453]: audit 2026-03-10T11:26:59.714753+0000 mgr.y (mgr.24310) 54 : audit [DBG] from='client.14595 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:27:00.998 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:00 vm05 bash[17453]: cephadm 2026-03-10T11:26:59.716182+0000 mgr.y (mgr.24310) 55 : cephadm [INF] Saving service rgw.foo spec with placement count:2
2026-03-10T11:27:00.999 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:00 vm05 bash[17453]: audit 2026-03-10T11:26:59.721033+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:00.999 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:00 vm05 bash[17453]: audit 2026-03-10T11:26:59.757031+0000 mon.b (mon.2) 69 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:27:00.999 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:00 vm05 bash[17453]: audit 2026-03-10T11:26:59.758654+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:00.999 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:00 vm05 bash[17453]: audit 2026-03-10T11:26:59.760829+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:27:00.999 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:00 vm05 bash[17453]: cluster 2026-03-10T11:26:59.776815+0000 mgr.y (mgr.24310) 56 : cluster [DBG] pgmap v36: 129 pgs: 32 creating+peering, 97 active+clean; 451 KiB data, 52 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
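"Scheduled rgw.foo update..." only records the service spec (placement count 2, port 8000); the rgw daemons themselves are created asynchronously by the cephadm serve loop, which is why the log moves straight on to the pool commands. A sketch for watching the rollout afterwards:

    ceph orch ls rgw
    ceph orch ps --daemon-type rgw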
192.168.123.105:0/3581587346' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-10T11:27:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:00 vm07 bash[17804]: audit 2026-03-10T11:26:59.714753+0000 mgr.y (mgr.24310) 54 : audit [DBG] from='client.14595 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo", "realm": "r", "zone": "z", "placement": "2", "port": 8000, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:27:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:00 vm07 bash[17804]: cephadm 2026-03-10T11:26:59.716182+0000 mgr.y (mgr.24310) 55 : cephadm [INF] Saving service rgw.foo spec with placement count:2 2026-03-10T11:27:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:00 vm07 bash[17804]: audit 2026-03-10T11:26:59.721033+0000 mon.a (mon.0) 641 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:27:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:00 vm07 bash[17804]: audit 2026-03-10T11:26:59.757031+0000 mon.b (mon.2) 69 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:27:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:00 vm07 bash[17804]: audit 2026-03-10T11:26:59.758654+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:27:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:00 vm07 bash[17804]: audit 2026-03-10T11:26:59.760829+0000 mon.b (mon.2) 71 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:27:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:00 vm07 bash[17804]: cluster 2026-03-10T11:26:59.776815+0000 mgr.y (mgr.24310) 56 : cluster [DBG] pgmap v36: 129 pgs: 32 creating+peering, 97 active+clean; 451 KiB data, 52 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s 2026-03-10T11:27:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:00 vm07 bash[17804]: audit 2026-03-10T11:27:00.262583+0000 mon.a (mon.0) 642 : audit [INF] from='client.? 192.168.123.105:0/3581587346' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "foo"}]: dispatch 2026-03-10T11:27:01.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:27:01 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:01] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:27:01.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:01 vm07 bash[17804]: audit 2026-03-10T11:27:00.747960+0000 mon.a (mon.0) 643 : audit [INF] from='client.? 192.168.123.105:0/3581587346' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-10T11:27:01.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:01 vm07 bash[17804]: cluster 2026-03-10T11:27:00.748027+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T11:27:01.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:01 vm07 bash[17804]: audit 2026-03-10T11:27:01.130769+0000 mon.c (mon.1) 25 : audit [INF] from='client.? 
192.168.123.105:0/3590369396' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T11:27:01.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:01 vm07 bash[17804]: audit 2026-03-10T11:27:01.131266+0000 mon.a (mon.0) 645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T11:27:02.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:01 vm05 bash[22470]: audit 2026-03-10T11:27:00.747960+0000 mon.a (mon.0) 643 : audit [INF] from='client.? 192.168.123.105:0/3581587346' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-10T11:27:02.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:01 vm05 bash[22470]: cluster 2026-03-10T11:27:00.748027+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T11:27:02.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:01 vm05 bash[22470]: audit 2026-03-10T11:27:01.130769+0000 mon.c (mon.1) 25 : audit [INF] from='client.? 192.168.123.105:0/3590369396' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T11:27:02.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:01 vm05 bash[22470]: audit 2026-03-10T11:27:01.131266+0000 mon.a (mon.0) 645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T11:27:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:01 vm05 bash[17453]: audit 2026-03-10T11:27:00.747960+0000 mon.a (mon.0) 643 : audit [INF] from='client.? 192.168.123.105:0/3581587346' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "foo"}]': finished 2026-03-10T11:27:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:01 vm05 bash[17453]: cluster 2026-03-10T11:27:00.748027+0000 mon.a (mon.0) 644 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-10T11:27:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:01 vm05 bash[17453]: audit 2026-03-10T11:27:01.130769+0000 mon.c (mon.1) 25 : audit [INF] from='client.? 192.168.123.105:0/3590369396' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T11:27:02.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:01 vm05 bash[17453]: audit 2026-03-10T11:27:01.131266+0000 mon.a (mon.0) 645 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]: dispatch 2026-03-10T11:27:03.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:02 vm05 bash[22470]: audit 2026-03-10T11:27:01.746319+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-10T11:27:03.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:02 vm05 bash[22470]: cluster 2026-03-10T11:27:01.746386+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T11:27:03.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:02 vm05 bash[22470]: cluster 2026-03-10T11:27:01.777154+0000 mgr.y (mgr.24310) 57 : cluster [DBG] pgmap v39: 161 pgs: 32 unknown, 32 creating+peering, 97 active+clean; 451 KiB data, 52 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:27:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:02 vm05 bash[17453]: audit 2026-03-10T11:27:01.746319+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-10T11:27:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:02 vm05 bash[17453]: cluster 2026-03-10T11:27:01.746386+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T11:27:03.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:02 vm05 bash[17453]: cluster 2026-03-10T11:27:01.777154+0000 mgr.y (mgr.24310) 57 : cluster [DBG] pgmap v39: 161 pgs: 32 unknown, 32 creating+peering, 97 active+clean; 451 KiB data, 52 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:27:03.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:02 vm07 bash[17804]: audit 2026-03-10T11:27:01.746319+0000 mon.a (mon.0) 646 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "foo","app": "rbd"}]': finished 2026-03-10T11:27:03.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:02 vm07 bash[17804]: cluster 2026-03-10T11:27:01.746386+0000 mon.a (mon.0) 647 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in 2026-03-10T11:27:03.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:02 vm07 bash[17804]: cluster 2026-03-10T11:27:01.777154+0000 mgr.y (mgr.24310) 57 : cluster [DBG] pgmap v39: 161 pgs: 32 unknown, 32 creating+peering, 97 active+clean; 451 KiB data, 52 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:27:03.966 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch apply iscsi foo u p' 2026-03-10T11:27:04.074 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:03 vm05 bash[17453]: cluster 2026-03-10T11:27:02.757595+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-10T11:27:04.074 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:03 vm05 bash[17453]: audit 2026-03-10T11:27:02.835445+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:27:04.074 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:03 vm05 bash[22470]: cluster 2026-03-10T11:27:02.757595+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-10T11:27:04.074 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:03 vm05 bash[22470]: audit 2026-03-10T11:27:02.835445+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:27:04.074 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:03 vm05 bash[42794]: level=info ts=2026-03-10T11:27:03.855Z caller=cluster.go:688 component=cluster msg="gossip settled; proceeding" 
elapsed=10.003502705s 2026-03-10T11:27:04.075 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:27:03 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:03] "GET /metrics HTTP/1.1" 200 197429 "" "Prometheus/2.33.4" 2026-03-10T11:27:04.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:03 vm07 bash[17804]: cluster 2026-03-10T11:27:02.757595+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in 2026-03-10T11:27:04.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:03 vm07 bash[17804]: audit 2026-03-10T11:27:02.835445+0000 mon.a (mon.0) 649 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:27:04.554 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled iscsi.foo update... 2026-03-10T11:27:04.680 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:27:04.681 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:27:04.681 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:27:04.681 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:27:04.681 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:27:04.681 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
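Each orchestration step above is one Ceph CLI command wrapped in cephadm shell, so it runs inside the v17.2.0 container against this cluster's config and keyring. A minimal sketch of the same invocation pattern, with the image, fsid, and paths copied from the DEBUG lines above and 'ceph orch ls' standing in for whichever command a step needs:

    sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 \
        shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d \
        -- bash -c 'ceph orch ls'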
2026-03-10T11:27:04.681 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:04.681 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:04.681 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:04.681 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:04.815 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: cluster 2026-03-10T11:27:03.777518+0000 mgr.y (mgr.24310) 58 : cluster [DBG] pgmap v41: 161 pgs: 10 creating+activating, 151 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: cluster 2026-03-10T11:27:03.881520+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.881745+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.917684+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.925037+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.951771+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: cephadm 2026-03-10T11:27:03.952814+0000 mgr.y (mgr.24310) 59 : cephadm [INF] Saving service rgw.foo spec with placement count:2
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.961947+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.964000+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.964643+0000 mon.b (mon.2) 72 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.972180+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.985836+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: cephadm 2026-03-10T11:27:03.988129+0000 mgr.y (mgr.24310) 60 : cephadm [INF] Deploying daemon rgw.foo.vm05.fdjkgz on vm05
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:03.988584+0000 mon.b (mon.2) 73 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:04.549714+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:04.781345+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:04.784364+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:04.785283+0000 mon.b (mon.2) 74 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:04.797735+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:04.815938+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.935 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:04 vm05 bash[17453]: audit 2026-03-10T11:27:04.819544+0000 mon.b (mon.2) 75 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
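The auth entries above show cephadm minting a per-daemon key for each RGW instance before deploying it. Reconstructed from the audit payload (the entity name embeds the host and a random suffix specific to this run), the roughly equivalent manual command would be:

    ceph auth get-or-create client.rgw.foo.vm05.fdjkgz \
        mon 'allow *' \
        mgr 'allow rw' \
        osd 'allow rwx tag rgw *=*'

Note the osd cap is scoped by the rgw application tag rather than by pool name, so the same key keeps working as the remaining rgw pools are created.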
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: cluster 2026-03-10T11:27:03.777518+0000 mgr.y (mgr.24310) 58 : cluster [DBG] pgmap v41: 161 pgs: 10 creating+activating, 151 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: cluster 2026-03-10T11:27:03.881520+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.881745+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.917684+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.925037+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.951771+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: cephadm 2026-03-10T11:27:03.952814+0000 mgr.y (mgr.24310) 59 : cephadm [INF] Saving service rgw.foo spec with placement count:2
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.961947+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.964000+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.964643+0000 mon.b (mon.2) 72 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.972180+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.985836+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: cephadm 2026-03-10T11:27:03.988129+0000 mgr.y (mgr.24310) 60 : cephadm [INF] Deploying daemon rgw.foo.vm05.fdjkgz on vm05
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:03.988584+0000 mon.b (mon.2) 73 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:04.549714+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:04.781345+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:04.784364+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:04.785283+0000 mon.b (mon.2) 74 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:04.797735+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:04.815938+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:04.936 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:04 vm05 bash[22470]: audit 2026-03-10T11:27:04.819544+0000 mon.b (mon.2) 75 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:04.936 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:04 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: cluster 2026-03-10T11:27:03.777518+0000 mgr.y (mgr.24310) 58 : cluster [DBG] pgmap v41: 161 pgs: 10 creating+activating, 151 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1.5 KiB/s wr, 3 op/s
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: cluster 2026-03-10T11:27:03.881520+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.881745+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.917684+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.925037+0000 mon.a (mon.0) 653 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.951771+0000 mon.a (mon.0) 654 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: cephadm 2026-03-10T11:27:03.952814+0000 mgr.y (mgr.24310) 59 : cephadm [INF] Saving service rgw.foo spec with placement count:2
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.961947+0000 mon.a (mon.0) 655 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.964000+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.964643+0000 mon.b (mon.2) 72 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.972180+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.985836+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: cephadm 2026-03-10T11:27:03.988129+0000 mgr.y (mgr.24310) 60 : cephadm [INF] Deploying daemon rgw.foo.vm05.fdjkgz on vm05
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:03.988584+0000 mon.b (mon.2) 73 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:04.549714+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:04.781345+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:04.784364+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:04.785283+0000 mon.b (mon.2) 74 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:04.797735+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:04.815938+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.091 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:04 vm07 bash[17804]: audit 2026-03-10T11:27:04.819544+0000 mon.b (mon.2) 75 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:05.419 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.419 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.419 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.419 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.419 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.419 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.419 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.419 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.419 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.683 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.683 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.683 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.684 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.684 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.684 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.684 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.684 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:05.684 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:27:05 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
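The warning storm above is systemd, not Ceph: every daemon on a host is started from the cluster's shared unit template, which in this cephadm version sets KillMode=none at line 24, so each journal tail repeats the same complaint. It is harmless noise for this test; the offending line could be confirmed on a node with something like:

    grep -n KillMode /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service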
2026-03-10T11:27:05.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:05 vm07 bash[17804]: audit 2026-03-10T11:27:04.539830+0000 mgr.y (mgr.24310) 61 : audit [DBG] from='client.14607 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:27:05.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:05 vm07 bash[17804]: cephadm 2026-03-10T11:27:04.540703+0000 mgr.y (mgr.24310) 62 : cephadm [INF] Saving service iscsi.foo spec with placement count:1
2026-03-10T11:27:05.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:05 vm07 bash[17804]: cephadm 2026-03-10T11:27:04.818858+0000 mgr.y (mgr.24310) 63 : cephadm [INF] Deploying daemon rgw.foo.vm07.mbukmh on vm07
2026-03-10T11:27:05.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:05 vm07 bash[17804]: audit 2026-03-10T11:27:05.691009+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:05.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:05 vm07 bash[17804]: audit 2026-03-10T11:27:05.695514+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:27:05.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:05 vm07 bash[17804]: audit 2026-03-10T11:27:05.696813+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:05.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:05 vm07 bash[17804]: audit 2026-03-10T11:27:05.698832+0000 mon.b (mon.2) 78 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:27:06.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:05 vm05 bash[22470]: audit 2026-03-10T11:27:04.539830+0000 mgr.y (mgr.24310) 61 : audit [DBG] from='client.14607 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:05 vm05 bash[22470]: cephadm 2026-03-10T11:27:04.540703+0000 mgr.y (mgr.24310) 62 : cephadm [INF] Saving service iscsi.foo spec with placement count:1
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:05 vm05 bash[22470]: cephadm 2026-03-10T11:27:04.818858+0000 mgr.y (mgr.24310) 63 : cephadm [INF] Deploying daemon rgw.foo.vm07.mbukmh on vm07
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:05 vm05 bash[22470]: audit 2026-03-10T11:27:05.691009+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:05 vm05 bash[22470]: audit 2026-03-10T11:27:05.695514+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:05 vm05 bash[22470]: audit 2026-03-10T11:27:05.696813+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:05 vm05 bash[22470]: audit 2026-03-10T11:27:05.698832+0000 mon.b (mon.2) 78 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:05 vm05 bash[17453]: audit 2026-03-10T11:27:04.539830+0000 mgr.y (mgr.24310) 61 : audit [DBG] from='client.14607 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "foo", "api_user": "u", "api_password": "p", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:05 vm05 bash[17453]: cephadm 2026-03-10T11:27:04.540703+0000 mgr.y (mgr.24310) 62 : cephadm [INF] Saving service iscsi.foo spec with placement count:1
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:05 vm05 bash[17453]: cephadm 2026-03-10T11:27:04.818858+0000 mgr.y (mgr.24310) 63 : cephadm [INF] Deploying daemon rgw.foo.vm07.mbukmh on vm07
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:05 vm05 bash[17453]: audit 2026-03-10T11:27:05.691009+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:05 vm05 bash[17453]: audit 2026-03-10T11:27:05.695514+0000 mon.b (mon.2) 76 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:05 vm05 bash[17453]: audit 2026-03-10T11:27:05.696813+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:06.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:05 vm05 bash[17453]: audit 2026-03-10T11:27:05.698832+0000 mon.b (mon.2) 78 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:27:07.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:06 vm07 bash[17804]: cluster 2026-03-10T11:27:05.777861+0000 mgr.y (mgr.24310) 64 : cluster [DBG] pgmap v43: 161 pgs: 10 creating+activating, 151 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 3 op/s
2026-03-10T11:27:07.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:06 vm07 bash[17804]: audit 2026-03-10T11:27:05.919517+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:07.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:06 vm05 bash[22470]: cluster 2026-03-10T11:27:05.777861+0000 mgr.y (mgr.24310) 64 : cluster [DBG] pgmap v43: 161 pgs: 10 creating+activating, 151 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 3 op/s
2026-03-10T11:27:07.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:06 vm05 bash[22470]: audit 2026-03-10T11:27:05.919517+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:07.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:06 vm05 bash[17453]: cluster 2026-03-10T11:27:05.777861+0000 mgr.y (mgr.24310) 64 : cluster [DBG] pgmap v43: 161 pgs: 10 creating+activating, 151 active+clean; 453 KiB data, 54 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1.4 KiB/s wr, 3 op/s
2026-03-10T11:27:07.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:06 vm05 bash[17453]: audit 2026-03-10T11:27:05.919517+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:09.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:09 vm07 bash[17804]: cluster 2026-03-10T11:27:07.778419+0000 mgr.y (mgr.24310) 65 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 456 KiB data, 57 MiB used, 160 GiB / 160 GiB avail; 74 KiB/s rd, 7.1 KiB/s wr, 164 op/s
2026-03-10T11:27:09.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:09 vm07 bash[17804]: audit 2026-03-10T11:27:08.766957+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:09.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:09 vm07 bash[17804]: audit 2026-03-10T11:27:08.985720+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:09.790 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:09 vm05 bash[22470]: cluster 2026-03-10T11:27:07.778419+0000 mgr.y (mgr.24310) 65 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 456 KiB data, 57 MiB used, 160 GiB / 160 GiB avail; 74 KiB/s rd, 7.1 KiB/s wr, 164 op/s
2026-03-10T11:27:09.790 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:09 vm05 bash[22470]: audit 2026-03-10T11:27:08.766957+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:09.790 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:09 vm05 bash[22470]: audit 2026-03-10T11:27:08.985720+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:09.790 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:09 vm05 bash[17453]: cluster 2026-03-10T11:27:07.778419+0000 mgr.y (mgr.24310) 65 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 456 KiB data, 57 MiB used, 160 GiB / 160 GiB avail; 74 KiB/s rd, 7.1 KiB/s wr, 164 op/s
2026-03-10T11:27:09.790 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:09 vm05 bash[17453]: audit 2026-03-10T11:27:08.766957+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:09.790 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:09 vm05 bash[17453]: audit 2026-03-10T11:27:08.985720+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:10.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:10.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:10.348 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
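The pgmap entries track the new pools settling: v39 still showed 32 unknown PGs right after creation, v41 and v43 have them creating+activating, and by v44 all 161 PGs are active+clean. Outside a teuthology run the same convergence could be polled with:

    ceph pg stat

(or 'ceph -s' for the full health summary), which is roughly what the sleep 180 in the task list is leaving time for.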
2026-03-10T11:27:10.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:10.348 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:10.348 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:10.348 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:10.348 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:10.348 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:10.683 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:27:10.683 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 bash[17453]: cephadm 2026-03-10T11:27:08.990542+0000 mgr.y (mgr.24310) 66 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T11:27:10.683 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 bash[17453]: audit 2026-03-10T11:27:09.741326+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:10.683 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 bash[17453]: audit 2026-03-10T11:27:09.752162+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:10.683 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 bash[17453]: audit 2026-03-10T11:27:09.757799+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:27:10.683 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 bash[17453]: audit 2026-03-10T11:27:09.770161+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:27:10.683 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 bash[17453]: audit 2026-03-10T11:27:09.771377+0000 mon.b (mon.2) 79 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:27:10.683 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 bash[17453]: audit 2026-03-10T11:27:09.773143+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-10T11:27:10.683 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:10 vm05 bash[17453]: audit 2026-03-10T11:27:09.780165+0000 mon.b (mon.2) 80 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
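The auth get-or-create dispatched and finished above is the key cephadm mints for the new iSCSI daemon; reconstructed from the audit record's caps, the equivalent CLI call reads:

    ceph auth get-or-create client.iscsi.foo.vm05.txapnk \
      mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
      mgr 'allow command "service status"' \
      osd 'allow rwx'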
2026-03-10T11:27:11.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:11 vm05 bash[22470]: cephadm 2026-03-10T11:27:09.761778+0000 mgr.y (mgr.24310) 67 : cephadm [INF] Checking pool "foo" exists for service iscsi.foo
2026-03-10T11:27:11.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:11 vm05 bash[22470]: cluster 2026-03-10T11:27:09.778682+0000 mgr.y (mgr.24310) 68 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 456 KiB data, 57 MiB used, 160 GiB / 160 GiB avail; 56 KiB/s rd, 5.4 KiB/s wr, 123 op/s
2026-03-10T11:27:11.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:11 vm05 bash[22470]: cephadm 2026-03-10T11:27:09.780193+0000 mgr.y (mgr.24310) 69 : cephadm [INF] Deploying daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:27:11.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:11 vm05 bash[22470]: audit 2026-03-10T11:27:11.085825+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:11.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:11 vm05 bash[22470]: audit 2026-03-10T11:27:11.163726+0000 mon.b (mon.2) 81 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:27:11.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:11 vm05 bash[22470]: audit 2026-03-10T11:27:11.164852+0000 mon.b (mon.2) 82 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:11.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:11 vm05 bash[22470]: audit 2026-03-10T11:27:11.165596+0000 mon.b (mon.2) 83 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
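Before deploying the daemon, cephadm fetches the full cluster configuration (config dump) and a stripped-down client config (config generate-minimal-conf) to bake into the container. The latter can be reproduced by hand:

    ceph config generate-minimal-conf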
2026-03-10T11:27:11.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:27:11 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:11] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
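The GET /metrics entries are the Prometheus daemon scraping the mgr prometheus module on each mgr host; a manual spot check would look like the following (assuming the module's default port 9283):

    curl -s http://vm07:9283/metrics | head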
2026-03-10T11:27:12.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:12 vm05 bash[22470]: cluster 2026-03-10T11:27:11.591407+0000 mon.a (mon.0) 674 : cluster [DBG] mgrmap e21: y(active, since 56s), standbys: x
2026-03-10T11:27:12.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:12 vm05 bash[22470]: cluster 2026-03-10T11:27:11.778987+0000 mgr.y (mgr.24310) 70 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 456 KiB data, 57 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 4.0 KiB/s wr, 107 op/s
2026-03-10T11:27:12.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:12 vm05 bash[22470]: audit 2026-03-10T11:27:12.054338+0000 mon.c (mon.1) 26 : audit [DBG] from='client.? 192.168.123.105:0/3006992076' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:27:12.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:12 vm05 bash[22470]: audit 2026-03-10T11:27:12.260107+0000 mon.c (mon.1) 27 : audit [INF] from='client.? 192.168.123.105:0/2614692264' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3163341454"}]: dispatch
2026-03-10T11:27:12.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:12 vm05 bash[22470]: audit 2026-03-10T11:27:12.260747+0000 mon.a (mon.0) 675 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3163341454"}]: dispatch
2026-03-10T11:27:12.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:12 vm05 bash[22470]: audit 2026-03-10T11:27:12.380668+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3163341454"}]': finished
2026-03-10T11:27:12.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:12 vm05 bash[22470]: cluster 2026-03-10T11:27:12.380850+0000 mon.a (mon.0) 677 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in
2026-03-10T11:27:13.826 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:13 vm05 bash[17453]: audit 2026-03-10T11:27:12.579166+0000 mon.c (mon.1) 28 : audit [INF] from='client.? 192.168.123.105:0/2160226129' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/467921525"}]: dispatch
2026-03-10T11:27:13.826 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:13 vm05 bash[17453]: audit 2026-03-10T11:27:12.579916+0000 mon.a (mon.0) 678 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/467921525"}]: dispatch
2026-03-10T11:27:13.826 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:13 vm05 bash[17453]: audit 2026-03-10T11:27:13.400758+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/467921525"}]': finished
2026-03-10T11:27:13.826 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:13 vm05 bash[17453]: cluster 2026-03-10T11:27:13.400897+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in
2026-03-10T11:27:14.098 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:27:13 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:13] "GET /metrics HTTP/1.1" 200 197429 "" "Prometheus/2.33.4"
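On startup the iSCSI gateway clears stale OSD blocklist entries for its old client addresses, one dispatch/finished round trip per address, which produces the run of osd blocklist audit records here. The equivalent CLI, using an address from the records above:

    ceph osd blocklist ls
    ceph osd blocklist rm 192.168.123.105:0/467921525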
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:13.628674+0000 mon.a (mon.0) 681 : audit [INF] from='client.? 192.168.123.105:0/1375727778' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2288453217"}]: dispatch
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: cluster 2026-03-10T11:27:13.779340+0000 mgr.y (mgr.24310) 71 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 63 MiB used, 160 GiB / 160 GiB avail; 144 KiB/s rd, 4.7 KiB/s wr, 271 op/s
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.473276+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.484208+0000 mon.a (mon.0) 683 : audit [INF] from='client.? 192.168.123.105:0/1375727778' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2288453217"}]': finished
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: cluster 2026-03-10T11:27:14.484359+0000 mon.a (mon.0) 684 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.485696+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.496909+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.504635+0000 mon.b (mon.2) 84 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.506449+0000 mon.b (mon.2) 85 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.511344+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.517460+0000 mon.b (mon.2) 86 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.520342+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.523635+0000 mon.b (mon.2) 87 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.524662+0000 mon.b (mon.2) 88 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.525274+0000 mon.b (mon.2) 89 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:27:15.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:14 vm05 bash[22470]: audit 2026-03-10T11:27:14.718802+0000 mon.a (mon.0) 689 : audit [INF] from='client.? 192.168.123.105:0/2587232131' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/2589338318"}]: dispatch
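cephadm then registers the new gateway with the dashboard module; the mon commands it dispatches map roughly to the CLI sequence below. The gateway-add form is a sketch: the service URL (with its embedded credentials) is supplied from a file via -i rather than inline, and the file path and credentials here are hypothetical:

    ceph dashboard set-iscsi-api-ssl-verification true
    echo 'http://admin:admin@192.168.123.105:5000' > /tmp/iscsi-gw   # hypothetical URL file
    ceph dashboard iscsi-gateway-add -i /tmp/iscsi-gw vm05
    ceph dashboard iscsi-gateway-list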
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: cephadm 2026-03-10T11:27:14.504635+0000 mgr.y (mgr.24310) 73 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:14.505127+0000 mgr.y (mgr.24310) 74 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:14.516276+0000 mgr.y (mgr.24310) 75 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: cephadm 2026-03-10T11:27:14.526058+0000 mgr.y (mgr.24310) 76 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:14.946418+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.525213+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.105:0/2587232131' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/2589338318"}]': finished 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: cluster 2026-03-10T11:27:15.525287+0000 mon.a (mon.0) 692 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.710375+0000 mon.c (mon.1) 29 : audit [INF] from='client.? 192.168.123.105:0/2248985768' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1118298400"}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.710985+0000 mon.a (mon.0) 693 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1118298400"}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.837006+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1f", "id": [7, 2]}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.837297+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.837544+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.837767+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.838067+0000 mon.b (mon.2) 90 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1f", "id": [7, 2]}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.838414+0000 mon.b (mon.2) 91 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.838699+0000 mon.b (mon.2) 92 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.838967+0000 mon.b (mon.2) 93 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.884292+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.885414+0000 mon.b (mon.2) 94 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.920862+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: 
dispatch 2026-03-10T11:27:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.922032+0000 mon.b (mon.2) 95 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:15 vm05 bash[17453]: audit 2026-03-10T11:27:15.928318+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: audit 2026-03-10T11:27:14.503624+0000 mgr.y (mgr.24310) 72 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: cephadm 2026-03-10T11:27:14.504635+0000 mgr.y (mgr.24310) 73 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: audit 2026-03-10T11:27:14.505127+0000 mgr.y (mgr.24310) 74 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: audit 2026-03-10T11:27:14.516276+0000 mgr.y (mgr.24310) 75 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: cephadm 2026-03-10T11:27:14.526058+0000 mgr.y (mgr.24310) 76 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: audit 2026-03-10T11:27:14.946418+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: audit 2026-03-10T11:27:15.525213+0000 mon.a (mon.0) 691 : audit [INF] from='client.? 192.168.123.105:0/2587232131' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/2589338318"}]': finished 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: cluster 2026-03-10T11:27:15.525287+0000 mon.a (mon.0) 692 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: audit 2026-03-10T11:27:15.710375+0000 mon.c (mon.1) 29 : audit [INF] from='client.? 192.168.123.105:0/2248985768' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1118298400"}]: dispatch 2026-03-10T11:27:16.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:15 vm05 bash[22470]: audit 2026-03-10T11:27:15.710985+0000 mon.a (mon.0) 693 : audit [INF] from='client.? 
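The osd pg-upmap-items batch is the mgr balancer module installing explicit PG-to-OSD mapping exceptions; each "id" pair in the audit records is a from-OSD, to-OSD substitution. The first entry above, issued by hand (move a replica of pg 2.1f from osd.7 to osd.2), would be:

    ceph osd pg-upmap-items 2.1f 7 2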
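The config rm pair clears the per-mgr rbd_support schedule state for the mgr.y instance; the same cleanup by hand would be:

    ceph config rm mgr mgr/rbd_support/y/mirror_snapshot_schedule
    ceph config rm mgr mgr/rbd_support/y/trash_purge_schedule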
dispatch 2026-03-10T11:27:16.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:15 vm07 bash[17804]: audit 2026-03-10T11:27:15.922032+0000 mon.b (mon.2) 95 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:27:16.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:15 vm07 bash[17804]: audit 2026-03-10T11:27:15.928318+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:16 vm05 bash[22470]: cluster 2026-03-10T11:27:15.779776+0000 mgr.y (mgr.24310) 77 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 63 MiB used, 160 GiB / 160 GiB avail; 178 KiB/s rd, 511 B/s wr, 300 op/s 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:16 vm05 bash[22470]: audit 2026-03-10T11:27:16.570712+0000 mon.a (mon.0) 701 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1118298400"}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:16 vm05 bash[22470]: audit 2026-03-10T11:27:16.570822+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1f", "id": [7, 2]}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:16 vm05 bash[22470]: audit 2026-03-10T11:27:16.570898+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:16 vm05 bash[22470]: audit 2026-03-10T11:27:16.570971+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:16 vm05 bash[22470]: audit 2026-03-10T11:27:16.571047+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:16 vm05 bash[22470]: cluster 2026-03-10T11:27:16.571127+0000 mon.a (mon.0) 706 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:16 vm05 bash[22470]: audit 2026-03-10T11:27:16.766134+0000 mon.a (mon.0) 707 : audit [INF] from='client.? 192.168.123.105:0/1906160228' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1312851658"}]: dispatch 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:16 vm05 bash[17453]: cluster 2026-03-10T11:27:15.779776+0000 mgr.y (mgr.24310) 77 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 63 MiB used, 160 GiB / 160 GiB avail; 178 KiB/s rd, 511 B/s wr, 300 op/s 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:16 vm05 bash[17453]: audit 2026-03-10T11:27:16.570712+0000 mon.a (mon.0) 701 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1118298400"}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:16 vm05 bash[17453]: audit 2026-03-10T11:27:16.570822+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1f", "id": [7, 2]}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:16 vm05 bash[17453]: audit 2026-03-10T11:27:16.570898+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:16 vm05 bash[17453]: audit 2026-03-10T11:27:16.570971+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:16 vm05 bash[17453]: audit 2026-03-10T11:27:16.571047+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]': finished 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:16 vm05 bash[17453]: cluster 2026-03-10T11:27:16.571127+0000 mon.a (mon.0) 706 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T11:27:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:16 vm05 bash[17453]: audit 2026-03-10T11:27:16.766134+0000 mon.a (mon.0) 707 : audit [INF] from='client.? 192.168.123.105:0/1906160228' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1312851658"}]: dispatch 2026-03-10T11:27:17.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:16 vm07 bash[17804]: cluster 2026-03-10T11:27:15.779776+0000 mgr.y (mgr.24310) 77 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 63 MiB used, 160 GiB / 160 GiB avail; 178 KiB/s rd, 511 B/s wr, 300 op/s 2026-03-10T11:27:17.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:16 vm07 bash[17804]: audit 2026-03-10T11:27:16.570712+0000 mon.a (mon.0) 701 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1118298400"}]': finished 2026-03-10T11:27:17.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:16 vm07 bash[17804]: audit 2026-03-10T11:27:16.570822+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1f", "id": [7, 2]}]': finished 2026-03-10T11:27:17.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:16 vm07 bash[17804]: audit 2026-03-10T11:27:16.570898+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.a", "id": [1, 2]}]': finished 2026-03-10T11:27:17.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:16 vm07 bash[17804]: audit 2026-03-10T11:27:16.570971+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.f", "id": [1, 2]}]': finished 2026-03-10T11:27:17.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:16 vm07 bash[17804]: audit 2026-03-10T11:27:16.571047+0000 mon.a (mon.0) 705 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.14", "id": [1, 5]}]': finished 2026-03-10T11:27:17.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:16 vm07 bash[17804]: cluster 2026-03-10T11:27:16.571127+0000 mon.a (mon.0) 706 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in 2026-03-10T11:27:17.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:16 vm07 bash[17804]: audit 2026-03-10T11:27:16.766134+0000 mon.a (mon.0) 707 : audit [INF] from='client.? 192.168.123.105:0/1906160228' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1312851658"}]: dispatch 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:18 vm05 bash[17453]: audit 2026-03-10T11:27:17.576849+0000 mon.a (mon.0) 708 : audit [INF] from='client.? 192.168.123.105:0/1906160228' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1312851658"}]': finished 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:18 vm05 bash[17453]: cluster 2026-03-10T11:27:17.576919+0000 mon.a (mon.0) 709 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:18 vm05 bash[17453]: audit 2026-03-10T11:27:17.772204+0000 mon.c (mon.1) 30 : audit [INF] from='client.? 192.168.123.105:0/3001967127' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1110057132"}]: dispatch 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:18 vm05 bash[17453]: audit 2026-03-10T11:27:17.772917+0000 mon.a (mon.0) 710 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1110057132"}]: dispatch 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:18 vm05 bash[17453]: cluster 2026-03-10T11:27:17.780178+0000 mgr.y (mgr.24310) 78 : cluster [DBG] pgmap v55: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 64 MiB used, 160 GiB / 160 GiB avail; 9.2 KiB/s rd, 255 B/s wr, 10 op/s 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:18 vm05 bash[22470]: audit 2026-03-10T11:27:17.576849+0000 mon.a (mon.0) 708 : audit [INF] from='client.? 192.168.123.105:0/1906160228' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1312851658"}]': finished 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:18 vm05 bash[22470]: cluster 2026-03-10T11:27:17.576919+0000 mon.a (mon.0) 709 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:18 vm05 bash[22470]: audit 2026-03-10T11:27:17.772204+0000 mon.c (mon.1) 30 : audit [INF] from='client.? 192.168.123.105:0/3001967127' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1110057132"}]: dispatch 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:18 vm05 bash[22470]: audit 2026-03-10T11:27:17.772917+0000 mon.a (mon.0) 710 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1110057132"}]: dispatch 2026-03-10T11:27:18.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:18 vm05 bash[22470]: cluster 2026-03-10T11:27:17.780178+0000 mgr.y (mgr.24310) 78 : cluster [DBG] pgmap v55: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 64 MiB used, 160 GiB / 160 GiB avail; 9.2 KiB/s rd, 255 B/s wr, 10 op/s 2026-03-10T11:27:18.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:18 vm07 bash[17804]: audit 2026-03-10T11:27:17.576849+0000 mon.a (mon.0) 708 : audit [INF] from='client.? 192.168.123.105:0/1906160228' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1312851658"}]': finished 2026-03-10T11:27:18.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:18 vm07 bash[17804]: cluster 2026-03-10T11:27:17.576919+0000 mon.a (mon.0) 709 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in 2026-03-10T11:27:18.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:18 vm07 bash[17804]: audit 2026-03-10T11:27:17.772204+0000 mon.c (mon.1) 30 : audit [INF] from='client.? 192.168.123.105:0/3001967127' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1110057132"}]: dispatch 2026-03-10T11:27:18.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:18 vm07 bash[17804]: audit 2026-03-10T11:27:17.772917+0000 mon.a (mon.0) 710 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1110057132"}]: dispatch 2026-03-10T11:27:18.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:18 vm07 bash[17804]: cluster 2026-03-10T11:27:17.780178+0000 mgr.y (mgr.24310) 78 : cluster [DBG] pgmap v55: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 64 MiB used, 160 GiB / 160 GiB avail; 9.2 KiB/s rd, 255 B/s wr, 10 op/s 2026-03-10T11:27:19.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:19 vm07 bash[17804]: audit 2026-03-10T11:27:18.598936+0000 mon.a (mon.0) 711 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1110057132"}]': finished 2026-03-10T11:27:19.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:19 vm07 bash[17804]: cluster 2026-03-10T11:27:18.599006+0000 mon.a (mon.0) 712 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T11:27:19.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:19 vm07 bash[17804]: audit 2026-03-10T11:27:18.801057+0000 mon.a (mon.0) 713 : audit [INF] from='client.? 192.168.123.105:0/1596399169' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/2589338318"}]: dispatch 2026-03-10T11:27:20.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:19 vm05 bash[22470]: audit 2026-03-10T11:27:18.598936+0000 mon.a (mon.0) 711 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1110057132"}]': finished 2026-03-10T11:27:20.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:19 vm05 bash[22470]: cluster 2026-03-10T11:27:18.599006+0000 mon.a (mon.0) 712 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T11:27:20.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:19 vm05 bash[22470]: audit 2026-03-10T11:27:18.801057+0000 mon.a (mon.0) 713 : audit [INF] from='client.? 192.168.123.105:0/1596399169' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/2589338318"}]: dispatch 2026-03-10T11:27:20.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:19 vm05 bash[17453]: audit 2026-03-10T11:27:18.598936+0000 mon.a (mon.0) 711 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1110057132"}]': finished 2026-03-10T11:27:20.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:19 vm05 bash[17453]: cluster 2026-03-10T11:27:18.599006+0000 mon.a (mon.0) 712 : cluster [DBG] osdmap e72: 8 total, 8 up, 8 in 2026-03-10T11:27:20.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:19 vm05 bash[17453]: audit 2026-03-10T11:27:18.801057+0000 mon.a (mon.0) 713 : audit [INF] from='client.? 192.168.123.105:0/1596399169' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/2589338318"}]: dispatch 2026-03-10T11:27:20.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:20 vm07 bash[17804]: audit 2026-03-10T11:27:19.601095+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 
192.168.123.105:0/1596399169' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/2589338318"}]': finished 2026-03-10T11:27:20.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:20 vm07 bash[17804]: cluster 2026-03-10T11:27:19.604029+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T11:27:20.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:20 vm07 bash[17804]: cluster 2026-03-10T11:27:19.780506+0000 mgr.y (mgr.24310) 79 : cluster [DBG] pgmap v58: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 64 MiB used, 160 GiB / 160 GiB avail; 9.2 KiB/s rd, 255 B/s wr, 10 op/s 2026-03-10T11:27:20.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:20 vm07 bash[17804]: audit 2026-03-10T11:27:19.821221+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.105:0/1263023594' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1110057132"}]: dispatch 2026-03-10T11:27:21.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:20 vm05 bash[22470]: audit 2026-03-10T11:27:19.601095+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 192.168.123.105:0/1596399169' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/2589338318"}]': finished 2026-03-10T11:27:21.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:20 vm05 bash[22470]: cluster 2026-03-10T11:27:19.604029+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T11:27:21.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:20 vm05 bash[22470]: cluster 2026-03-10T11:27:19.780506+0000 mgr.y (mgr.24310) 79 : cluster [DBG] pgmap v58: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 64 MiB used, 160 GiB / 160 GiB avail; 9.2 KiB/s rd, 255 B/s wr, 10 op/s 2026-03-10T11:27:21.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:20 vm05 bash[22470]: audit 2026-03-10T11:27:19.821221+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 192.168.123.105:0/1263023594' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1110057132"}]: dispatch 2026-03-10T11:27:21.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:20 vm05 bash[17453]: audit 2026-03-10T11:27:19.601095+0000 mon.a (mon.0) 714 : audit [INF] from='client.? 192.168.123.105:0/1596399169' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/2589338318"}]': finished 2026-03-10T11:27:21.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:20 vm05 bash[17453]: cluster 2026-03-10T11:27:19.604029+0000 mon.a (mon.0) 715 : cluster [DBG] osdmap e73: 8 total, 8 up, 8 in 2026-03-10T11:27:21.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:20 vm05 bash[17453]: cluster 2026-03-10T11:27:19.780506+0000 mgr.y (mgr.24310) 79 : cluster [DBG] pgmap v58: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 64 MiB used, 160 GiB / 160 GiB avail; 9.2 KiB/s rd, 255 B/s wr, 10 op/s 2026-03-10T11:27:21.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:20 vm05 bash[17453]: audit 2026-03-10T11:27:19.821221+0000 mon.a (mon.0) 716 : audit [INF] from='client.? 
192.168.123.105:0/1263023594' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1110057132"}]: dispatch 2026-03-10T11:27:21.895 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:21 vm05 bash[22470]: audit 2026-03-10T11:27:20.628833+0000 mon.a (mon.0) 717 : audit [INF] from='client.? 192.168.123.105:0/1263023594' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1110057132"}]': finished 2026-03-10T11:27:21.895 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:21 vm05 bash[22470]: cluster 2026-03-10T11:27:20.628971+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T11:27:21.895 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:21 vm05 bash[22470]: audit 2026-03-10T11:27:20.822859+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.105:0/2784075696' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3902952517"}]: dispatch 2026-03-10T11:27:21.895 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:21 vm05 bash[17453]: audit 2026-03-10T11:27:20.628833+0000 mon.a (mon.0) 717 : audit [INF] from='client.? 192.168.123.105:0/1263023594' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1110057132"}]': finished 2026-03-10T11:27:21.895 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:21 vm05 bash[17453]: cluster 2026-03-10T11:27:20.628971+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T11:27:21.895 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:21 vm05 bash[17453]: audit 2026-03-10T11:27:20.822859+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.105:0/2784075696' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3902952517"}]: dispatch 2026-03-10T11:27:21.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:27:21 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:21] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:27:21.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:21 vm07 bash[17804]: audit 2026-03-10T11:27:20.628833+0000 mon.a (mon.0) 717 : audit [INF] from='client.? 192.168.123.105:0/1263023594' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1110057132"}]': finished 2026-03-10T11:27:21.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:21 vm07 bash[17804]: cluster 2026-03-10T11:27:20.628971+0000 mon.a (mon.0) 718 : cluster [DBG] osdmap e74: 8 total, 8 up, 8 in 2026-03-10T11:27:21.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:21 vm07 bash[17804]: audit 2026-03-10T11:27:20.822859+0000 mon.a (mon.0) 719 : audit [INF] from='client.? 192.168.123.105:0/2784075696' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3902952517"}]: dispatch 2026-03-10T11:27:23.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:23 vm05 bash[17453]: audit 2026-03-10T11:27:21.642763+0000 mon.a (mon.0) 720 : audit [INF] from='client.? 
192.168.123.105:0/2784075696' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3902952517"}]': finished 2026-03-10T11:27:23.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:23 vm05 bash[17453]: cluster 2026-03-10T11:27:21.642879+0000 mon.a (mon.0) 721 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T11:27:23.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:23 vm05 bash[17453]: cluster 2026-03-10T11:27:21.780878+0000 mgr.y (mgr.24310) 80 : cluster [DBG] pgmap v61: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 64 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:27:23.349 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:23 vm05 bash[17453]: audit 2026-03-10T11:27:21.826451+0000 mon.c (mon.1) 31 : audit [INF] from='client.? 192.168.123.105:0/1710247477' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3473116901"}]: dispatch 2026-03-10T11:27:23.349 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:23 vm05 bash[17453]: audit 2026-03-10T11:27:21.826731+0000 mon.a (mon.0) 722 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3473116901"}]: dispatch 2026-03-10T11:27:23.349 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:23 vm05 bash[17453]: audit 2026-03-10T11:27:21.891982+0000 mgr.y (mgr.24310) 81 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:27:23.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:23 vm05 bash[22470]: audit 2026-03-10T11:27:21.642763+0000 mon.a (mon.0) 720 : audit [INF] from='client.? 192.168.123.105:0/2784075696' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3902952517"}]': finished 2026-03-10T11:27:23.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:23 vm05 bash[22470]: cluster 2026-03-10T11:27:21.642879+0000 mon.a (mon.0) 721 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T11:27:23.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:23 vm05 bash[22470]: cluster 2026-03-10T11:27:21.780878+0000 mgr.y (mgr.24310) 80 : cluster [DBG] pgmap v61: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 64 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:27:23.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:23 vm05 bash[22470]: audit 2026-03-10T11:27:21.826451+0000 mon.c (mon.1) 31 : audit [INF] from='client.? 192.168.123.105:0/1710247477' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3473116901"}]: dispatch 2026-03-10T11:27:23.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:23 vm05 bash[22470]: audit 2026-03-10T11:27:21.826731+0000 mon.a (mon.0) 722 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3473116901"}]: dispatch 2026-03-10T11:27:23.349 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:23 vm05 bash[22470]: audit 2026-03-10T11:27:21.891982+0000 mgr.y (mgr.24310) 81 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:27:23.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:23 vm07 bash[17804]: audit 2026-03-10T11:27:21.642763+0000 mon.a (mon.0) 720 : audit [INF] from='client.? 192.168.123.105:0/2784075696' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3902952517"}]': finished 2026-03-10T11:27:23.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:23 vm07 bash[17804]: cluster 2026-03-10T11:27:21.642879+0000 mon.a (mon.0) 721 : cluster [DBG] osdmap e75: 8 total, 8 up, 8 in 2026-03-10T11:27:23.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:23 vm07 bash[17804]: cluster 2026-03-10T11:27:21.780878+0000 mgr.y (mgr.24310) 80 : cluster [DBG] pgmap v61: 161 pgs: 4 peering, 157 active+clean; 457 KiB data, 64 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:27:23.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:23 vm07 bash[17804]: audit 2026-03-10T11:27:21.826451+0000 mon.c (mon.1) 31 : audit [INF] from='client.? 192.168.123.105:0/1710247477' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3473116901"}]: dispatch 2026-03-10T11:27:23.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:23 vm07 bash[17804]: audit 2026-03-10T11:27:21.826731+0000 mon.a (mon.0) 722 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3473116901"}]: dispatch 2026-03-10T11:27:23.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:23 vm07 bash[17804]: audit 2026-03-10T11:27:21.891982+0000 mgr.y (mgr.24310) 81 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:27:23.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:27:23.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:27:23.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:27:23.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:27:24.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:24 vm05 bash[22470]: audit 2026-03-10T11:27:23.014309+0000 mon.a (mon.0) 723 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3473116901"}]': finished 2026-03-10T11:27:24.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:24 vm05 bash[22470]: cluster 2026-03-10T11:27:23.014431+0000 mon.a (mon.0) 724 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-10T11:27:24.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:24 vm05 bash[22470]: audit 2026-03-10T11:27:23.203119+0000 mon.a (mon.0) 725 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1953728704"}]: dispatch 2026-03-10T11:27:24.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:24 vm05 bash[22470]: audit 2026-03-10T11:27:23.204069+0000 mon.b (mon.2) 96 : audit [INF] from='client.? 192.168.123.105:0/1466197313' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1953728704"}]: dispatch 2026-03-10T11:27:24.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:24 vm05 bash[17453]: audit 2026-03-10T11:27:23.014309+0000 mon.a (mon.0) 723 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3473116901"}]': finished 2026-03-10T11:27:24.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:24 vm05 bash[17453]: cluster 2026-03-10T11:27:23.014431+0000 mon.a (mon.0) 724 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-10T11:27:24.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:24 vm05 bash[17453]: audit 2026-03-10T11:27:23.203119+0000 mon.a (mon.0) 725 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1953728704"}]: dispatch 2026-03-10T11:27:24.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:24 vm05 bash[17453]: audit 2026-03-10T11:27:23.204069+0000 mon.b (mon.2) 96 : audit [INF] from='client.? 192.168.123.105:0/1466197313' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1953728704"}]: dispatch 2026-03-10T11:27:24.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:27:23 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:23] "GET /metrics HTTP/1.1" 200 207627 "" "Prometheus/2.33.4" 2026-03-10T11:27:24.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:24 vm07 bash[17804]: audit 2026-03-10T11:27:23.014309+0000 mon.a (mon.0) 723 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3473116901"}]': finished 2026-03-10T11:27:24.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:24 vm07 bash[17804]: cluster 2026-03-10T11:27:23.014431+0000 mon.a (mon.0) 724 : cluster [DBG] osdmap e76: 8 total, 8 up, 8 in 2026-03-10T11:27:24.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:24 vm07 bash[17804]: audit 2026-03-10T11:27:23.203119+0000 mon.a (mon.0) 725 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1953728704"}]: dispatch 2026-03-10T11:27:24.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:24 vm07 bash[17804]: audit 2026-03-10T11:27:23.204069+0000 mon.b (mon.2) 96 : audit [INF] from='client.? 
192.168.123.105:0/1466197313' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1953728704"}]: dispatch 2026-03-10T11:27:25.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:25 vm05 bash[22470]: cluster 2026-03-10T11:27:23.781326+0000 mgr.y (mgr.24310) 82 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 65 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 238 B/s, 0 objects/s recovering 2026-03-10T11:27:25.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:25 vm05 bash[22470]: audit 2026-03-10T11:27:24.085268+0000 mon.a (mon.0) 726 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1953728704"}]': finished 2026-03-10T11:27:25.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:25 vm05 bash[22470]: cluster 2026-03-10T11:27:24.085291+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T11:27:25.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:25 vm05 bash[22470]: audit 2026-03-10T11:27:24.274671+0000 mon.a (mon.0) 728 : audit [INF] from='client.? 192.168.123.105:0/122358636' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4010853674"}]: dispatch 2026-03-10T11:27:25.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:25 vm05 bash[17453]: cluster 2026-03-10T11:27:23.781326+0000 mgr.y (mgr.24310) 82 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 65 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 238 B/s, 0 objects/s recovering 2026-03-10T11:27:25.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:25 vm05 bash[17453]: audit 2026-03-10T11:27:24.085268+0000 mon.a (mon.0) 726 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1953728704"}]': finished 2026-03-10T11:27:25.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:25 vm05 bash[17453]: cluster 2026-03-10T11:27:24.085291+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T11:27:25.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:25 vm05 bash[17453]: audit 2026-03-10T11:27:24.274671+0000 mon.a (mon.0) 728 : audit [INF] from='client.? 192.168.123.105:0/122358636' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4010853674"}]: dispatch 2026-03-10T11:27:25.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:25 vm07 bash[17804]: cluster 2026-03-10T11:27:23.781326+0000 mgr.y (mgr.24310) 82 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 65 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 238 B/s, 0 objects/s recovering 2026-03-10T11:27:25.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:25 vm07 bash[17804]: audit 2026-03-10T11:27:24.085268+0000 mon.a (mon.0) 726 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1953728704"}]': finished 2026-03-10T11:27:25.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:25 vm07 bash[17804]: cluster 2026-03-10T11:27:24.085291+0000 mon.a (mon.0) 727 : cluster [DBG] osdmap e77: 8 total, 8 up, 8 in 2026-03-10T11:27:25.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:25 vm07 bash[17804]: audit 2026-03-10T11:27:24.274671+0000 mon.a (mon.0) 728 : audit [INF] from='client.? 192.168.123.105:0/122358636' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4010853674"}]: dispatch 2026-03-10T11:27:26.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:26 vm07 bash[17804]: audit 2026-03-10T11:27:25.087431+0000 mon.a (mon.0) 729 : audit [INF] from='client.? 192.168.123.105:0/122358636' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4010853674"}]': finished 2026-03-10T11:27:26.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:26 vm07 bash[17804]: cluster 2026-03-10T11:27:25.089177+0000 mon.a (mon.0) 730 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-10T11:27:26.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:26 vm07 bash[17804]: audit 2026-03-10T11:27:25.297328+0000 mon.c (mon.1) 32 : audit [INF] from='client.? 192.168.123.105:0/1847534948' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2723537270"}]: dispatch 2026-03-10T11:27:26.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:26 vm07 bash[17804]: audit 2026-03-10T11:27:25.297949+0000 mon.a (mon.0) 731 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2723537270"}]: dispatch 2026-03-10T11:27:26.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:26 vm05 bash[22470]: audit 2026-03-10T11:27:25.087431+0000 mon.a (mon.0) 729 : audit [INF] from='client.? 192.168.123.105:0/122358636' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4010853674"}]': finished 2026-03-10T11:27:26.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:26 vm05 bash[22470]: cluster 2026-03-10T11:27:25.089177+0000 mon.a (mon.0) 730 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-10T11:27:26.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:26 vm05 bash[22470]: audit 2026-03-10T11:27:25.297328+0000 mon.c (mon.1) 32 : audit [INF] from='client.? 192.168.123.105:0/1847534948' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2723537270"}]: dispatch 2026-03-10T11:27:26.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:26 vm05 bash[22470]: audit 2026-03-10T11:27:25.297949+0000 mon.a (mon.0) 731 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2723537270"}]: dispatch 2026-03-10T11:27:26.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:26 vm05 bash[17453]: audit 2026-03-10T11:27:25.087431+0000 mon.a (mon.0) 729 : audit [INF] from='client.? 
192.168.123.105:0/122358636' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4010853674"}]': finished 2026-03-10T11:27:26.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:26 vm05 bash[17453]: cluster 2026-03-10T11:27:25.089177+0000 mon.a (mon.0) 730 : cluster [DBG] osdmap e78: 8 total, 8 up, 8 in 2026-03-10T11:27:26.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:26 vm05 bash[17453]: audit 2026-03-10T11:27:25.297328+0000 mon.c (mon.1) 32 : audit [INF] from='client.? 192.168.123.105:0/1847534948' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2723537270"}]: dispatch 2026-03-10T11:27:26.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:26 vm05 bash[17453]: audit 2026-03-10T11:27:25.297949+0000 mon.a (mon.0) 731 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2723537270"}]: dispatch 2026-03-10T11:27:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:27 vm07 bash[17804]: cluster 2026-03-10T11:27:25.781635+0000 mgr.y (mgr.24310) 83 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 65 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 241 B/s, 0 objects/s recovering 2026-03-10T11:27:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:27 vm07 bash[17804]: audit 2026-03-10T11:27:26.090127+0000 mon.a (mon.0) 732 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2723537270"}]': finished 2026-03-10T11:27:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:27 vm07 bash[17804]: cluster 2026-03-10T11:27:26.090155+0000 mon.a (mon.0) 733 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-10T11:27:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:27 vm07 bash[17804]: audit 2026-03-10T11:27:26.287524+0000 mon.a (mon.0) 734 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3538663775"}]: dispatch 2026-03-10T11:27:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:27 vm07 bash[17804]: audit 2026-03-10T11:27:26.288395+0000 mon.b (mon.2) 97 : audit [INF] from='client.? 192.168.123.105:0/245691055' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3538663775"}]: dispatch 2026-03-10T11:27:27.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:27 vm05 bash[22470]: cluster 2026-03-10T11:27:25.781635+0000 mgr.y (mgr.24310) 83 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 65 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 241 B/s, 0 objects/s recovering 2026-03-10T11:27:27.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:27 vm05 bash[22470]: audit 2026-03-10T11:27:26.090127+0000 mon.a (mon.0) 732 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2723537270"}]': finished 2026-03-10T11:27:27.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:27 vm05 bash[22470]: cluster 2026-03-10T11:27:26.090155+0000 mon.a (mon.0) 733 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-10T11:27:27.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:27 vm05 bash[22470]: audit 2026-03-10T11:27:26.287524+0000 mon.a (mon.0) 734 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3538663775"}]: dispatch 2026-03-10T11:27:27.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:27 vm05 bash[22470]: audit 2026-03-10T11:27:26.288395+0000 mon.b (mon.2) 97 : audit [INF] from='client.? 192.168.123.105:0/245691055' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3538663775"}]: dispatch 2026-03-10T11:27:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:27 vm05 bash[17453]: cluster 2026-03-10T11:27:25.781635+0000 mgr.y (mgr.24310) 83 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 65 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s; 241 B/s, 0 objects/s recovering 2026-03-10T11:27:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:27 vm05 bash[17453]: audit 2026-03-10T11:27:26.090127+0000 mon.a (mon.0) 732 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2723537270"}]': finished 2026-03-10T11:27:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:27 vm05 bash[17453]: cluster 2026-03-10T11:27:26.090155+0000 mon.a (mon.0) 733 : cluster [DBG] osdmap e79: 8 total, 8 up, 8 in 2026-03-10T11:27:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:27 vm05 bash[17453]: audit 2026-03-10T11:27:26.287524+0000 mon.a (mon.0) 734 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3538663775"}]: dispatch 2026-03-10T11:27:27.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:27 vm05 bash[17453]: audit 2026-03-10T11:27:26.288395+0000 mon.b (mon.2) 97 : audit [INF] from='client.? 192.168.123.105:0/245691055' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3538663775"}]: dispatch 2026-03-10T11:27:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:28 vm07 bash[17804]: audit 2026-03-10T11:27:27.117388+0000 mon.a (mon.0) 735 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3538663775"}]': finished 2026-03-10T11:27:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:28 vm07 bash[17804]: cluster 2026-03-10T11:27:27.117443+0000 mon.a (mon.0) 736 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-10T11:27:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:28 vm07 bash[17804]: audit 2026-03-10T11:27:27.298347+0000 mon.c (mon.1) 33 : audit [INF] from='client.? 
192.168.123.105:0/4016031770' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1953728704"}]: dispatch 2026-03-10T11:27:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:28 vm07 bash[17804]: audit 2026-03-10T11:27:27.298711+0000 mon.a (mon.0) 737 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1953728704"}]: dispatch 2026-03-10T11:27:28.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:28 vm05 bash[22470]: audit 2026-03-10T11:27:27.117388+0000 mon.a (mon.0) 735 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3538663775"}]': finished 2026-03-10T11:27:28.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:28 vm05 bash[22470]: cluster 2026-03-10T11:27:27.117443+0000 mon.a (mon.0) 736 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-10T11:27:28.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:28 vm05 bash[22470]: audit 2026-03-10T11:27:27.298347+0000 mon.c (mon.1) 33 : audit [INF] from='client.? 192.168.123.105:0/4016031770' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1953728704"}]: dispatch 2026-03-10T11:27:28.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:28 vm05 bash[22470]: audit 2026-03-10T11:27:27.298711+0000 mon.a (mon.0) 737 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1953728704"}]: dispatch 2026-03-10T11:27:28.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:28 vm05 bash[17453]: audit 2026-03-10T11:27:27.117388+0000 mon.a (mon.0) 735 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3538663775"}]': finished 2026-03-10T11:27:28.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:28 vm05 bash[17453]: cluster 2026-03-10T11:27:27.117443+0000 mon.a (mon.0) 736 : cluster [DBG] osdmap e80: 8 total, 8 up, 8 in 2026-03-10T11:27:28.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:28 vm05 bash[17453]: audit 2026-03-10T11:27:27.298347+0000 mon.c (mon.1) 33 : audit [INF] from='client.? 192.168.123.105:0/4016031770' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1953728704"}]: dispatch 2026-03-10T11:27:28.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:28 vm05 bash[17453]: audit 2026-03-10T11:27:27.298711+0000 mon.a (mon.0) 737 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1953728704"}]: dispatch 2026-03-10T11:27:29.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:29 vm07 bash[17804]: cluster 2026-03-10T11:27:27.781930+0000 mgr.y (mgr.24310) 84 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:27:29.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:29 vm07 bash[17804]: audit 2026-03-10T11:27:28.119692+0000 mon.a (mon.0) 738 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1953728704"}]': finished 2026-03-10T11:27:29.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:29 vm07 bash[17804]: cluster 2026-03-10T11:27:28.119823+0000 mon.a (mon.0) 739 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-10T11:27:29.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:29 vm05 bash[17453]: cluster 2026-03-10T11:27:27.781930+0000 mgr.y (mgr.24310) 84 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:27:29.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:29 vm05 bash[17453]: audit 2026-03-10T11:27:28.119692+0000 mon.a (mon.0) 738 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1953728704"}]': finished 2026-03-10T11:27:29.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:29 vm05 bash[17453]: cluster 2026-03-10T11:27:28.119823+0000 mon.a (mon.0) 739 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-10T11:27:29.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:29 vm05 bash[22470]: cluster 2026-03-10T11:27:27.781930+0000 mgr.y (mgr.24310) 84 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:27:29.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:29 vm05 bash[22470]: audit 2026-03-10T11:27:28.119692+0000 mon.a (mon.0) 738 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1953728704"}]': finished 2026-03-10T11:27:29.598 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:29 vm05 bash[22470]: cluster 2026-03-10T11:27:28.119823+0000 mon.a (mon.0) 739 : cluster [DBG] osdmap e81: 8 total, 8 up, 8 in 2026-03-10T11:27:31.575 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:31 vm07 bash[17804]: cluster 2026-03-10T11:27:29.782304+0000 mgr.y (mgr.24310) 85 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T11:27:31.597 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:31 vm05 bash[22470]: cluster 2026-03-10T11:27:29.782304+0000 mgr.y (mgr.24310) 85 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T11:27:31.598 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:31 vm05 bash[17453]: cluster 2026-03-10T11:27:29.782304+0000 mgr.y (mgr.24310) 85 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T11:27:31.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:27:31 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:31] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:27:33.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:33 vm05 bash[22470]: cluster 2026-03-10T11:27:31.782782+0000 mgr.y (mgr.24310) 86 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:27:33.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:33 vm05 bash[22470]: audit 2026-03-10T11:27:31.900278+0000 mgr.y (mgr.24310) 87 : audit [DBG] from='client.24592 -' 
entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:27:33.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:33 vm05 bash[17453]: cluster 2026-03-10T11:27:31.782782+0000 mgr.y (mgr.24310) 86 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:33.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:33 vm05 bash[17453]: audit 2026-03-10T11:27:31.900278+0000 mgr.y (mgr.24310) 87 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:27:33.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:33 vm05 bash[42794]: level=error ts=2026-03-10T11:27:33.504Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:27:33.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:27:33.506Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:27:33.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:27:33.506Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:27:33.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:33 vm07 bash[17804]: cluster 2026-03-10T11:27:31.782782+0000 mgr.y (mgr.24310) 86 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 66 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:33.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:33 vm07 bash[17804]: audit 2026-03-10T11:27:31.900278+0000 mgr.y (mgr.24310) 87 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:27:34.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:27:33 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:33] "GET /metrics HTTP/1.1" 200 207665 "" "Prometheus/2.33.4"
2026-03-10T11:27:34.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:34 vm07 bash[17804]: cluster 2026-03-10T11:27:33.783344+0000 mgr.y (mgr.24310) 88 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T11:27:34.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:34 vm05 bash[22470]: cluster 2026-03-10T11:27:33.783344+0000 mgr.y (mgr.24310) 88 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T11:27:34.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:34 vm05 bash[17453]: cluster 2026-03-10T11:27:33.783344+0000 mgr.y (mgr.24310) 88 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s
2026-03-10T11:27:37.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:36 vm07 bash[17804]: cluster 2026-03-10T11:27:35.783696+0000 mgr.y (mgr.24310) 89 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 590 B/s rd, 0 op/s
2026-03-10T11:27:37.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:36 vm05 bash[22470]: cluster 2026-03-10T11:27:35.783696+0000 mgr.y (mgr.24310) 89 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 590 B/s rd, 0 op/s
2026-03-10T11:27:37.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:36 vm05 bash[17453]: cluster 2026-03-10T11:27:35.783696+0000 mgr.y (mgr.24310) 89 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 590 B/s rd, 0 op/s
2026-03-10T11:27:39.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:39 vm07 bash[17804]: cluster 2026-03-10T11:27:37.784355+0000 mgr.y (mgr.24310) 90 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T11:27:39.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:39 vm05 bash[22470]: cluster 2026-03-10T11:27:37.784355+0000 mgr.y (mgr.24310) 90 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T11:27:39.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:39 vm05 bash[17453]: cluster 2026-03-10T11:27:37.784355+0000 mgr.y (mgr.24310) 90 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T11:27:41.698 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:27:41 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:41] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:27:41.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:41 vm07 bash[17804]: cluster 2026-03-10T11:27:39.784711+0000 mgr.y (mgr.24310) 91 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 877 B/s rd, 0 op/s
2026-03-10T11:27:41.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:41 vm05 bash[17453]: cluster 2026-03-10T11:27:39.784711+0000 mgr.y (mgr.24310) 91 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 877 B/s rd, 0 op/s
2026-03-10T11:27:41.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:41 vm05 bash[22470]: cluster 2026-03-10T11:27:39.784711+0000 mgr.y (mgr.24310) 91 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 877 B/s rd, 0 op/s
2026-03-10T11:27:42.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:42 vm05 bash[17453]: cluster 2026-03-10T11:27:41.785038+0000 mgr.y (mgr.24310) 92 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:42.848 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:42 vm05 bash[17453]: audit 2026-03-10T11:27:41.910033+0000 mgr.y (mgr.24310) 93 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:27:42.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:42 vm05 bash[22470]: cluster 2026-03-10T11:27:41.785038+0000 mgr.y (mgr.24310) 92 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:42.848 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:42 vm05 bash[22470]: audit 2026-03-10T11:27:41.910033+0000 mgr.y (mgr.24310) 93 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:27:42.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:42 vm07 bash[17804]: cluster 2026-03-10T11:27:41.785038+0000 mgr.y (mgr.24310) 92 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:42.949 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:42 vm07 bash[17804]: audit 2026-03-10T11:27:41.910033+0000 mgr.y (mgr.24310) 93 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:27:43.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:43 vm05 bash[42794]: level=error ts=2026-03-10T11:27:43.505Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:27:43.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:27:43.507Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:27:43.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:27:43.507Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:27:44.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:27:43 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:43] "GET /metrics HTTP/1.1" 200 207665 "" "Prometheus/2.33.4"
2026-03-10T11:27:45.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:44 vm05 bash[22470]: cluster 2026-03-10T11:27:43.785689+0000 mgr.y (mgr.24310) 94 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:45.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:44 vm05 bash[17453]: cluster 2026-03-10T11:27:43.785689+0000 mgr.y (mgr.24310) 94 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:45.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:44 vm07 bash[17804]: cluster 2026-03-10T11:27:43.785689+0000 mgr.y (mgr.24310) 94 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:47.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:47 vm05 bash[22470]: cluster 2026-03-10T11:27:45.785958+0000 mgr.y (mgr.24310) 95 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:47.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:46 vm05 bash[17453]: cluster 2026-03-10T11:27:45.785958+0000 mgr.y (mgr.24310) 95 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:47.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:47 vm07 bash[17804]: cluster 2026-03-10T11:27:45.785958+0000 mgr.y (mgr.24310) 95 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:48.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:48 vm07 bash[17804]: cluster 2026-03-10T11:27:47.786597+0000 mgr.y (mgr.24310) 96 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:49.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:48 vm05 bash[22470]: cluster 2026-03-10T11:27:47.786597+0000 mgr.y (mgr.24310) 96 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:49.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:48 vm05 bash[17453]: cluster 2026-03-10T11:27:47.786597+0000 mgr.y (mgr.24310) 96 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:51.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:50 vm07 bash[17804]: cluster 2026-03-10T11:27:49.786934+0000 mgr.y (mgr.24310) 97 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:51.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:50 vm05 bash[22470]: cluster 2026-03-10T11:27:49.786934+0000 mgr.y (mgr.24310) 97 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:51.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:50 vm05 bash[17453]: cluster 2026-03-10T11:27:49.786934+0000 mgr.y (mgr.24310) 97 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:51.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:27:51 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:51] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:27:53.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:52 vm07 bash[17804]: cluster 2026-03-10T11:27:51.787215+0000 mgr.y (mgr.24310) 98 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:53.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:52 vm07 bash[17804]: audit 2026-03-10T11:27:51.920495+0000 mgr.y (mgr.24310) 99 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:27:53.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:52 vm05 bash[22470]: cluster 2026-03-10T11:27:51.787215+0000 mgr.y (mgr.24310) 98 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:53.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:52 vm05 bash[22470]: audit 2026-03-10T11:27:51.920495+0000 mgr.y (mgr.24310) 99 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:27:53.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:52 vm05 bash[17453]: cluster 2026-03-10T11:27:51.787215+0000 mgr.y (mgr.24310) 98 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:53.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:52 vm05 bash[17453]: audit 2026-03-10T11:27:51.920495+0000 mgr.y (mgr.24310) 99 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:27:53.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:53 vm05 bash[42794]: level=error ts=2026-03-10T11:27:53.506Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:27:53.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:27:53.508Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:27:53.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:27:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:27:53.508Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:27:54.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:27:53 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:27:53] "GET /metrics HTTP/1.1" 200 207605 "" "Prometheus/2.33.4"
2026-03-10T11:27:55.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:55 vm05 bash[22470]: cluster 2026-03-10T11:27:53.787961+0000 mgr.y (mgr.24310) 100 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:55.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:55 vm05 bash[17453]: cluster 2026-03-10T11:27:53.787961+0000 mgr.y (mgr.24310) 100 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:55.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:55 vm07 bash[17804]: cluster 2026-03-10T11:27:53.787961+0000 mgr.y (mgr.24310) 100 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:57.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:57 vm05 bash[22470]: cluster 2026-03-10T11:27:55.788361+0000 mgr.y (mgr.24310) 101 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:57.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:57 vm05 bash[17453]: cluster 2026-03-10T11:27:55.788361+0000 mgr.y (mgr.24310) 101 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:57 vm07 bash[17804]: cluster 2026-03-10T11:27:55.788361+0000 mgr.y (mgr.24310) 101 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:27:58.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:27:58 vm05 bash[22470]: cluster 2026-03-10T11:27:57.788871+0000 mgr.y (mgr.24310) 102 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:58.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:27:58 vm05 bash[17453]: cluster 2026-03-10T11:27:57.788871+0000 mgr.y (mgr.24310) 102 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:27:58.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:27:58 vm07 bash[17804]: cluster 2026-03-10T11:27:57.788871+0000 mgr.y (mgr.24310) 102 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:01.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:00 vm05 bash[22470]: cluster 2026-03-10T11:27:59.789204+0000 mgr.y (mgr.24310) 103 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:01.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:00 vm05 bash[17453]: cluster 2026-03-10T11:27:59.789204+0000 mgr.y (mgr.24310) 103 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:01.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:00 vm07 bash[17804]: cluster 2026-03-10T11:27:59.789204+0000 mgr.y (mgr.24310) 103 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:01.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:28:01 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:01] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:28:03.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:02 vm05 bash[22470]: cluster 2026-03-10T11:28:01.789500+0000 mgr.y (mgr.24310) 104 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:03.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:02 vm05 bash[22470]: audit 2026-03-10T11:28:01.924056+0000 mgr.y (mgr.24310) 105 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:03.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:02 vm05 bash[17453]: cluster 2026-03-10T11:28:01.789500+0000 mgr.y (mgr.24310) 104 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:03.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:02 vm05 bash[17453]: audit 2026-03-10T11:28:01.924056+0000 mgr.y (mgr.24310) 105 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:03.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:03 vm07 bash[17804]: cluster 2026-03-10T11:28:01.789500+0000 mgr.y (mgr.24310) 104 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:03.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:03 vm07 bash[17804]: audit 2026-03-10T11:28:01.924056+0000 mgr.y (mgr.24310) 105 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:03.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:03 vm05 bash[42794]: level=error ts=2026-03-10T11:28:03.507Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:03.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:03.508Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:03.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:03.509Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:28:04.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:28:03 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:03] "GET /metrics HTTP/1.1" 200 207579 "" "Prometheus/2.33.4"
2026-03-10T11:28:05.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:05 vm05 bash[22470]: cluster 2026-03-10T11:28:03.789970+0000 mgr.y (mgr.24310) 106 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:05.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:05 vm05 bash[17453]: cluster 2026-03-10T11:28:03.789970+0000 mgr.y (mgr.24310) 106 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:05.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:05 vm07 bash[17804]: cluster 2026-03-10T11:28:03.789970+0000 mgr.y (mgr.24310) 106 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:07.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:07 vm05 bash[22470]: cluster 2026-03-10T11:28:05.790225+0000 mgr.y (mgr.24310) 107 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:07.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:07 vm05 bash[17453]: cluster 2026-03-10T11:28:05.790225+0000 mgr.y (mgr.24310) 107 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:07.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:07 vm07 bash[17804]: cluster 2026-03-10T11:28:05.790225+0000 mgr.y (mgr.24310) 107 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:08.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:08 vm05 bash[22470]: cluster 2026-03-10T11:28:07.790697+0000 mgr.y (mgr.24310) 108 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:08.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:08 vm05 bash[17453]: cluster 2026-03-10T11:28:07.790697+0000 mgr.y (mgr.24310) 108 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:08.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:08 vm07 bash[17804]: cluster 2026-03-10T11:28:07.790697+0000 mgr.y (mgr.24310) 108 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:11.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:10 vm07 bash[17804]: cluster 2026-03-10T11:28:09.790984+0000 mgr.y (mgr.24310) 109 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:11.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:10 vm05 bash[22470]: cluster 2026-03-10T11:28:09.790984+0000 mgr.y (mgr.24310) 109 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:11.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:10 vm05 bash[17453]: cluster 2026-03-10T11:28:09.790984+0000 mgr.y (mgr.24310) 109 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:11.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:28:11 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:11] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:28:13.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:12 vm07 bash[17804]: cluster 2026-03-10T11:28:11.791297+0000 mgr.y (mgr.24310) 110 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:13.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:12 vm07 bash[17804]: audit 2026-03-10T11:28:11.933543+0000 mgr.y (mgr.24310) 111 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:13.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:12 vm05 bash[22470]: cluster 2026-03-10T11:28:11.791297+0000 mgr.y (mgr.24310) 110 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:13.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:12 vm05 bash[22470]: audit 2026-03-10T11:28:11.933543+0000 mgr.y (mgr.24310) 111 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:13.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:12 vm05 bash[17453]: cluster 2026-03-10T11:28:11.791297+0000 mgr.y (mgr.24310) 110 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:13.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:12 vm05 bash[17453]: audit 2026-03-10T11:28:11.933543+0000 mgr.y (mgr.24310) 111 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:13.847 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:13 vm05 bash[42794]: level=error ts=2026-03-10T11:28:13.508Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:28:13.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:13.509Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:13.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:13.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:28:14.348 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:28:13 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:13] "GET /metrics HTTP/1.1" 200 207579 "" "Prometheus/2.33.4"
2026-03-10T11:28:15.134 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:14 vm07 bash[17804]: cluster 2026-03-10T11:28:13.791798+0000 mgr.y (mgr.24310) 112 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:15.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:14 vm05 bash[17453]: cluster 2026-03-10T11:28:13.791798+0000 mgr.y (mgr.24310) 112 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:15.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:14 vm05 bash[22470]: cluster 2026-03-10T11:28:13.791798+0000 mgr.y (mgr.24310) 112 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:16.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:15 vm07 bash[17804]: audit 2026-03-10T11:28:14.952600+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:28:16.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:15 vm07 bash[17804]: audit 2026-03-10T11:28:14.953915+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:28:16.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:15 vm07 bash[17804]: audit 2026-03-10T11:28:14.954809+0000 mon.b (mon.2) 100 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:28:16.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:15 vm07 bash[17804]: audit 2026-03-10T11:28:15.138704+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:28:16.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:15 vm05 bash[22470]: audit 2026-03-10T11:28:14.952600+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:28:16.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:15 vm05 bash[22470]: audit 2026-03-10T11:28:14.953915+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:28:16.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:15 vm05 bash[22470]: audit 2026-03-10T11:28:14.954809+0000 mon.b (mon.2) 100 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:28:16.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:15 vm05 bash[22470]: audit 2026-03-10T11:28:15.138704+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:28:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:15 vm05 bash[17453]: audit 2026-03-10T11:28:14.952600+0000 mon.b (mon.2) 98 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:28:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:15 vm05 bash[17453]: audit 2026-03-10T11:28:14.953915+0000 mon.b (mon.2) 99 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:28:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:15 vm05 bash[17453]: audit 2026-03-10T11:28:14.954809+0000 mon.b (mon.2) 100 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:28:16.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:15 vm05 bash[17453]: audit 2026-03-10T11:28:15.138704+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:28:17.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:16 vm07 bash[17804]: cluster 2026-03-10T11:28:15.792036+0000 mgr.y (mgr.24310) 113 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:17.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:16 vm07 bash[17804]: audit 2026-03-10T11:28:15.895120+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:28:17.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:16 vm07 bash[17804]: audit 2026-03-10T11:28:15.896338+0000 mon.b (mon.2) 101 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:28:17.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:16 vm07 bash[17804]: audit 2026-03-10T11:28:15.928441+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:28:17.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:16 vm07 bash[17804]: audit 2026-03-10T11:28:15.929639+0000 mon.b (mon.2) 102 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:28:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:16 vm05 bash[22470]: cluster 2026-03-10T11:28:15.792036+0000 mgr.y (mgr.24310) 113 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:16 vm05 bash[22470]: audit 2026-03-10T11:28:15.895120+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:28:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:16 vm05 bash[22470]: audit 2026-03-10T11:28:15.896338+0000 mon.b (mon.2) 101 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:28:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:16 vm05 bash[22470]: audit 2026-03-10T11:28:15.928441+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:28:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:16 vm05 bash[22470]: audit 2026-03-10T11:28:15.929639+0000 mon.b (mon.2) 102 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:28:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:16 vm05 bash[17453]: cluster 2026-03-10T11:28:15.792036+0000 mgr.y (mgr.24310) 113 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:16 vm05 bash[17453]: audit 2026-03-10T11:28:15.895120+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:28:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:16 vm05 bash[17453]: audit 2026-03-10T11:28:15.896338+0000 mon.b (mon.2) 101 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:28:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:16 vm05 bash[17453]: audit 2026-03-10T11:28:15.928441+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:28:17.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:16 vm05 bash[17453]: audit 2026-03-10T11:28:15.929639+0000 mon.b (mon.2) 102 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:28:18.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:18 vm05 bash[22470]: cluster 2026-03-10T11:28:17.792612+0000 mgr.y (mgr.24310) 114 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:18.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:18 vm05 bash[17453]: cluster 2026-03-10T11:28:17.792612+0000 mgr.y (mgr.24310) 114 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:18.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:18 vm07 bash[17804]: cluster 2026-03-10T11:28:17.792612+0000 mgr.y (mgr.24310) 114 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:21.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:20 vm07 bash[17804]: cluster 2026-03-10T11:28:19.792942+0000 mgr.y (mgr.24310) 115 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:21.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:20 vm05 bash[22470]: cluster 2026-03-10T11:28:19.792942+0000 mgr.y (mgr.24310) 115 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:21.348 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:20 vm05 bash[17453]: cluster 2026-03-10T11:28:19.792942+0000 mgr.y (mgr.24310) 115 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:21.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:28:21 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:21] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:28:23.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:22 vm07 bash[17804]: cluster 2026-03-10T11:28:21.793218+0000 mgr.y (mgr.24310) 116 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:23.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:22 vm07 bash[17804]: audit 2026-03-10T11:28:21.943464+0000 mgr.y (mgr.24310) 117 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:23.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:22 vm05 bash[22470]: cluster 2026-03-10T11:28:21.793218+0000 mgr.y (mgr.24310) 116 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:23.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:22 vm05 bash[22470]: audit 2026-03-10T11:28:21.943464+0000 mgr.y (mgr.24310) 117 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:23.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:22 vm05 bash[17453]: cluster 2026-03-10T11:28:21.793218+0000 mgr.y (mgr.24310) 116 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:23.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:22 vm05 bash[17453]: audit 2026-03-10T11:28:21.943464+0000 mgr.y (mgr.24310) 117 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:23.847 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:23 vm05 bash[42794]: level=error ts=2026-03-10T11:28:23.508Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:23.847 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:23.510Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:23.847 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:23.511Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:28:24.347 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:28:23 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:23] "GET /metrics HTTP/1.1" 200 207586 "" "Prometheus/2.33.4"
2026-03-10T11:28:25.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:24 vm07 bash[17804]: cluster 2026-03-10T11:28:23.793682+0000 mgr.y (mgr.24310) 118 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:25.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:24 vm05 bash[17453]: cluster 2026-03-10T11:28:23.793682+0000 mgr.y (mgr.24310) 118 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:25.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:24 vm05 bash[22470]: cluster 2026-03-10T11:28:23.793682+0000 mgr.y (mgr.24310) 118 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:27.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:26 vm07 bash[17804]: cluster 2026-03-10T11:28:25.793986+0000 mgr.y (mgr.24310) 119 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:27.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:26 vm05 bash[22470]: cluster 2026-03-10T11:28:25.793986+0000 mgr.y (mgr.24310) 119 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:27.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:26 vm05 bash[17453]: cluster 2026-03-10T11:28:25.793986+0000 mgr.y (mgr.24310) 119 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:28.847 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:28 vm05 bash[22470]: cluster 2026-03-10T11:28:27.794467+0000 mgr.y (mgr.24310) 120 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:28.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:28 vm05 bash[17453]: cluster 2026-03-10T11:28:27.794467+0000 mgr.y (mgr.24310) 120 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:28.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:28 vm07 bash[17804]: cluster 2026-03-10T11:28:27.794467+0000 mgr.y (mgr.24310) 120 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:31.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:30 vm07 bash[17804]: cluster 2026-03-10T11:28:29.794723+0000 mgr.y (mgr.24310) 121 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:31.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:30 vm05 bash[22470]: cluster 2026-03-10T11:28:29.794723+0000 mgr.y (mgr.24310) 121 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:31.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:30 vm05 bash[17453]: cluster 2026-03-10T11:28:29.794723+0000 mgr.y (mgr.24310) 121 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:31.948 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:28:31 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:31] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:28:33.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:33 vm07 bash[17804]: cluster 2026-03-10T11:28:31.795040+0000 mgr.y (mgr.24310) 122 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:33.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:33 vm07 bash[17804]: audit 2026-03-10T11:28:31.953451+0000 mgr.y (mgr.24310) 123 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:33.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:33 vm05 bash[22470]: cluster 2026-03-10T11:28:31.795040+0000 mgr.y (mgr.24310) 122 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:33.512 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:33 vm05 bash[22470]: audit 2026-03-10T11:28:31.953451+0000 mgr.y (mgr.24310) 123 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:33.512 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:33 vm05 bash[17453]: cluster 2026-03-10T11:28:31.795040+0000 mgr.y (mgr.24310) 122 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:33.512 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:33 vm05 bash[17453]: audit 2026-03-10T11:28:31.953451+0000 mgr.y (mgr.24310) 123 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:33.846 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:33 vm05 bash[42794]: level=error ts=2026-03-10T11:28:33.509Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:28:33.858 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:33.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:33.858 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:33.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:28:34.346 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:28:33 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:33] "GET /metrics HTTP/1.1" 200 207636 "" "Prometheus/2.33.4"
2026-03-10T11:28:35.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:35 vm07 bash[17804]: cluster 2026-03-10T11:28:33.795551+0000 mgr.y (mgr.24310) 124 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:35.846 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:35 vm05 bash[22470]: cluster 2026-03-10T11:28:33.795551+0000 mgr.y (mgr.24310) 124 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:35.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:35 vm05 bash[17453]: cluster 2026-03-10T11:28:33.795551+0000 mgr.y (mgr.24310) 124 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:37.698 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:37 vm07 bash[17804]: cluster 2026-03-10T11:28:35.795875+0000 mgr.y (mgr.24310) 125 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:37.846 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:37 vm05 bash[22470]: cluster 2026-03-10T11:28:35.795875+0000 mgr.y (mgr.24310) 125 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:37.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:37 vm05 bash[17453]: cluster 2026-03-10T11:28:35.795875+0000 mgr.y (mgr.24310) 125 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:38.846 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:38 vm05 bash[22470]: cluster 2026-03-10T11:28:37.796344+0000 mgr.y (mgr.24310) 126 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:38.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:38 vm05 bash[17453]: cluster 2026-03-10T11:28:37.796344+0000 mgr.y (mgr.24310) 126 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:38.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:38 vm07 bash[17804]: cluster 2026-03-10T11:28:37.796344+0000 mgr.y (mgr.24310) 126 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:40 vm07 bash[17804]: cluster 2026-03-10T11:28:39.796694+0000 mgr.y (mgr.24310) 127 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:41.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:40 vm05 bash[22470]: cluster 2026-03-10T11:28:39.796694+0000 mgr.y (mgr.24310) 127 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:41.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:40 vm05 bash[17453]: cluster 2026-03-10T11:28:39.796694+0000 mgr.y (mgr.24310) 127 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:41.947 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:28:41 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:41] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:28:43.512 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:43 vm05 bash[17453]: cluster 2026-03-10T11:28:41.796983+0000 mgr.y (mgr.24310) 128 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:43.512 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:43 vm05 bash[17453]: audit 2026-03-10T11:28:41.959687+0000 mgr.y (mgr.24310) 129 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:43.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:43 vm07 bash[17804]: cluster 2026-03-10T11:28:41.796983+0000 mgr.y (mgr.24310) 128 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:43.719 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:43 vm07 bash[17804]: audit 2026-03-10T11:28:41.959687+0000 mgr.y (mgr.24310) 129 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:43.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:43 vm05 bash[22470]: cluster 2026-03-10T11:28:41.796983+0000 mgr.y (mgr.24310) 128 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:43.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:43 vm05 bash[22470]: audit 2026-03-10T11:28:41.959687+0000 mgr.y (mgr.24310) 129 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:43.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:43 vm05 bash[42794]: level=error ts=2026-03-10T11:28:43.510Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:43.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:43.512Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:28:43.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:43.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:44.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:28:43 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:43] "GET /metrics HTTP/1.1" 200 207636 "" "Prometheus/2.33.4"
2026-03-10T11:28:45.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:45 vm05 bash[22470]: cluster 2026-03-10T11:28:43.797482+0000 mgr.y (mgr.24310) 130 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:45.595 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:45 vm05 bash[17453]: cluster 2026-03-10T11:28:43.797482+0000 mgr.y (mgr.24310) 130 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:45.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:45 vm07 bash[17804]: cluster 2026-03-10T11:28:43.797482+0000 mgr.y (mgr.24310) 130 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:47.595 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:47 vm05 bash[17453]: cluster 2026-03-10T11:28:45.797841+0000 mgr.y (mgr.24310) 131 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:47.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:47 vm05 bash[22470]: cluster 2026-03-10T11:28:45.797841+0000 mgr.y (mgr.24310) 131 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:47.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:47 vm07 bash[17804]: cluster 2026-03-10T11:28:45.797841+0000 mgr.y (mgr.24310) 131 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:48.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:48 vm05 bash[17453]: cluster 2026-03-10T11:28:47.798361+0000 mgr.y (mgr.24310) 132 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:48.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:48 vm05 bash[22470]: cluster 2026-03-10T11:28:47.798361+0000 mgr.y (mgr.24310) 132 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:48.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:48 vm07 bash[17804]: cluster 2026-03-10T11:28:47.798361+0000 mgr.y (mgr.24310) 132 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:51.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:50 vm07 bash[17804]: cluster 2026-03-10T11:28:49.798666+0000 mgr.y (mgr.24310) 133 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:51.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:50 vm05 bash[17453]: cluster 2026-03-10T11:28:49.798666+0000 mgr.y (mgr.24310) 133 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:51.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:50 vm05 bash[22470]: cluster 2026-03-10T11:28:49.798666+0000 mgr.y (mgr.24310) 133 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:51.947 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:28:51 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:51] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:28:53.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:53 vm05 bash[17453]: cluster 2026-03-10T11:28:51.798994+0000 mgr.y (mgr.24310) 134 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:53.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:53 vm05 bash[17453]: audit 2026-03-10T11:28:51.967850+0000 mgr.y (mgr.24310) 135 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:53.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:53 vm05 bash[22470]: cluster 2026-03-10T11:28:51.798994+0000 mgr.y (mgr.24310) 134 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:53.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:53 vm05 bash[22470]: audit 2026-03-10T11:28:51.967850+0000 mgr.y (mgr.24310) 135 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:53.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:53 vm07 bash[17804]: cluster 2026-03-10T11:28:51.798994+0000 mgr.y (mgr.24310) 134 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:53.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:53 vm07 bash[17804]: audit 2026-03-10T11:28:51.967850+0000 mgr.y (mgr.24310) 135 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:28:53.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:53 vm05 bash[42794]: level=error ts=2026-03-10T11:28:53.511Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:53.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:53.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:28:53.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:28:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:28:53.513Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:28:54.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:28:53 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:28:53] "GET /metrics HTTP/1.1" 200 207655 "" "Prometheus/2.33.4"
2026-03-10T11:28:55.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:55 vm05 bash[22470]: cluster 2026-03-10T11:28:53.799557+0000 mgr.y (mgr.24310) 136 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:55.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:55 vm05 bash[17453]: cluster 2026-03-10T11:28:53.799557+0000 mgr.y (mgr.24310) 136 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:55.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:55 vm07 bash[17804]: cluster 2026-03-10T11:28:53.799557+0000 mgr.y (mgr.24310) 136 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:57.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:57 vm05 bash[22470]: cluster 2026-03-10T11:28:55.799815+0000 mgr.y (mgr.24310) 137 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:57.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:57 vm05 bash[17453]: cluster 2026-03-10T11:28:55.799815+0000 mgr.y (mgr.24310) 137 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:57.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:57 vm07 bash[17804]: cluster 2026-03-10T11:28:55.799815+0000 mgr.y (mgr.24310) 137 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:28:58.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:28:58 vm05 bash[22470]: cluster 2026-03-10T11:28:57.800255+0000 mgr.y (mgr.24310) 138 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:58.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:28:58 vm05 bash[17453]: cluster 2026-03-10T11:28:57.800255+0000 mgr.y (mgr.24310) 138 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:28:58.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:28:58 vm07 bash[17804]: cluster 2026-03-10T11:28:57.800255+0000 mgr.y (mgr.24310) 138 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:29:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:00 vm07 bash[17804]: cluster 2026-03-10T11:28:59.800556+0000 mgr.y (mgr.24310) 139 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:29:01.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:00 vm05 bash[22470]: cluster 2026-03-10T11:28:59.800556+0000 mgr.y (mgr.24310) 139 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:29:01.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:00 vm05 bash[17453]: cluster 2026-03-10T11:28:59.800556+0000 mgr.y (mgr.24310) 139 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:29:01.947 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:29:01 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:01] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:29:03.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:02 vm07 bash[17804]: cluster 2026-03-10T11:29:01.800949+0000 mgr.y (mgr.24310) 140 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:29:03.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:02 vm07 bash[17804]: audit 2026-03-10T11:29:01.975580+0000 mgr.y (mgr.24310) 141 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:29:03.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:02 vm05 bash[22470]: cluster 2026-03-10T11:29:01.800949+0000 mgr.y (mgr.24310) 140 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:29:03.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:02 vm05 bash[22470]: audit 2026-03-10T11:29:01.975580+0000 mgr.y (mgr.24310) 141 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:29:03.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:02 vm05 bash[17453]: cluster 2026-03-10T11:29:01.800949+0000 mgr.y (mgr.24310) 140 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:29:03.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:02 vm05 bash[17453]: audit 2026-03-10T11:29:01.975580+0000 mgr.y (mgr.24310) 141 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:29:03.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:03 vm05 bash[42794]: level=error ts=2026-03-10T11:29:03.512Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:29:03.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:03.514Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:29:03.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:03.515Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:29:04.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:29:03 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:03] "GET /metrics HTTP/1.1" 200 207662 "" "Prometheus/2.33.4"
2026-03-10T11:29:05.345
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:04 vm05 bash[22470]: cluster 2026-03-10T11:29:03.801418+0000 mgr.y (mgr.24310) 142 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:05.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:04 vm05 bash[17453]: cluster 2026-03-10T11:29:03.801418+0000 mgr.y (mgr.24310) 142 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:05.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:04 vm07 bash[17804]: cluster 2026-03-10T11:29:03.801418+0000 mgr.y (mgr.24310) 142 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:07.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:06 vm05 bash[22470]: cluster 2026-03-10T11:29:05.801684+0000 mgr.y (mgr.24310) 143 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:07.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:06 vm05 bash[17453]: cluster 2026-03-10T11:29:05.801684+0000 mgr.y (mgr.24310) 143 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:07.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:06 vm07 bash[17804]: cluster 2026-03-10T11:29:05.801684+0000 mgr.y (mgr.24310) 143 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:08.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:08 vm05 bash[22470]: cluster 2026-03-10T11:29:07.802157+0000 mgr.y (mgr.24310) 144 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:08.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:08 vm05 bash[17453]: cluster 2026-03-10T11:29:07.802157+0000 mgr.y (mgr.24310) 144 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:08.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:08 vm07 bash[17804]: cluster 2026-03-10T11:29:07.802157+0000 mgr.y (mgr.24310) 144 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:10 vm07 bash[17804]: cluster 2026-03-10T11:29:09.802425+0000 mgr.y (mgr.24310) 145 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:11.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:10 vm05 bash[22470]: cluster 2026-03-10T11:29:09.802425+0000 mgr.y (mgr.24310) 145 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:11.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:10 vm05 bash[17453]: cluster 2026-03-10T11:29:09.802425+0000 mgr.y (mgr.24310) 145 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:11.947 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:29:11 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:11] 
"GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:29:13.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:12 vm07 bash[17804]: cluster 2026-03-10T11:29:11.802850+0000 mgr.y (mgr.24310) 146 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:13.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:12 vm07 bash[17804]: audit 2026-03-10T11:29:11.985950+0000 mgr.y (mgr.24310) 147 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:13.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:12 vm05 bash[22470]: cluster 2026-03-10T11:29:11.802850+0000 mgr.y (mgr.24310) 146 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:13.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:12 vm05 bash[22470]: audit 2026-03-10T11:29:11.985950+0000 mgr.y (mgr.24310) 147 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:13.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:12 vm05 bash[17453]: cluster 2026-03-10T11:29:11.802850+0000 mgr.y (mgr.24310) 146 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:13.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:12 vm05 bash[17453]: audit 2026-03-10T11:29:11.985950+0000 mgr.y (mgr.24310) 147 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:13.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:13 vm05 bash[42794]: level=error ts=2026-03-10T11:29:13.513Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:29:13.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:13.514Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:29:13.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:13.514Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:29:14.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:29:13 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:13] "GET /metrics 
HTTP/1.1" 200 207662 "" "Prometheus/2.33.4" 2026-03-10T11:29:15.150 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:14 vm05 bash[17453]: cluster 2026-03-10T11:29:13.803238+0000 mgr.y (mgr.24310) 148 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:15.151 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:14 vm05 bash[22470]: cluster 2026-03-10T11:29:13.803238+0000 mgr.y (mgr.24310) 148 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:15.151 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:14 vm07 bash[17804]: cluster 2026-03-10T11:29:13.803238+0000 mgr.y (mgr.24310) 148 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:16.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:15 vm07 bash[17804]: audit 2026-03-10T11:29:15.141262+0000 mon.b (mon.2) 103 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:29:16.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:15 vm07 bash[17804]: audit 2026-03-10T11:29:15.142349+0000 mon.b (mon.2) 104 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:29:16.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:15 vm07 bash[17804]: audit 2026-03-10T11:29:15.143126+0000 mon.b (mon.2) 105 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:29:16.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:15 vm07 bash[17804]: audit 2026-03-10T11:29:15.335443+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:29:16.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:15 vm05 bash[22470]: audit 2026-03-10T11:29:15.141262+0000 mon.b (mon.2) 103 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:29:16.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:15 vm05 bash[22470]: audit 2026-03-10T11:29:15.142349+0000 mon.b (mon.2) 104 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:29:16.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:15 vm05 bash[22470]: audit 2026-03-10T11:29:15.143126+0000 mon.b (mon.2) 105 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:29:16.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:15 vm05 bash[22470]: audit 2026-03-10T11:29:15.335443+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:29:16.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:15 vm05 bash[17453]: audit 2026-03-10T11:29:15.141262+0000 mon.b (mon.2) 103 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:29:16.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:15 vm05 bash[17453]: audit 2026-03-10T11:29:15.142349+0000 mon.b (mon.2) 104 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:29:16.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:15 vm05 bash[17453]: audit 2026-03-10T11:29:15.143126+0000 mon.b (mon.2) 105 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:29:16.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:15 vm05 bash[17453]: audit 2026-03-10T11:29:15.335443+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:29:17.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:16 vm07 bash[17804]: cluster 2026-03-10T11:29:15.803534+0000 mgr.y (mgr.24310) 149 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:17.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:16 vm07 bash[17804]: audit 2026-03-10T11:29:15.931139+0000 mon.b (mon.2) 106 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:29:17.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:16 vm07 bash[17804]: audit 2026-03-10T11:29:15.931859+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:29:17.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:16 vm07 bash[17804]: audit 2026-03-10T11:29:15.959886+0000 mon.b (mon.2) 107 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:29:17.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:16 vm07 bash[17804]: audit 2026-03-10T11:29:15.960665+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:29:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:16 vm05 bash[22470]: cluster 2026-03-10T11:29:15.803534+0000 mgr.y (mgr.24310) 149 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:16 vm05 bash[22470]: audit 2026-03-10T11:29:15.931139+0000 mon.b (mon.2) 106 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:29:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:16 vm05 bash[22470]: audit 2026-03-10T11:29:15.931859+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:29:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:16 vm05 bash[22470]: audit 2026-03-10T11:29:15.959886+0000 mon.b (mon.2) 107 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:29:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:16 vm05 bash[22470]: audit 2026-03-10T11:29:15.960665+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24310 ' entity='mgr.y' 
cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:29:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:16 vm05 bash[17453]: cluster 2026-03-10T11:29:15.803534+0000 mgr.y (mgr.24310) 149 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:16 vm05 bash[17453]: audit 2026-03-10T11:29:15.931139+0000 mon.b (mon.2) 106 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:29:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:16 vm05 bash[17453]: audit 2026-03-10T11:29:15.931859+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:29:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:16 vm05 bash[17453]: audit 2026-03-10T11:29:15.959886+0000 mon.b (mon.2) 107 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:29:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:16 vm05 bash[17453]: audit 2026-03-10T11:29:15.960665+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:29:18.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:18 vm05 bash[22470]: cluster 2026-03-10T11:29:17.804017+0000 mgr.y (mgr.24310) 150 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:18 vm05 bash[17453]: cluster 2026-03-10T11:29:17.804017+0000 mgr.y (mgr.24310) 150 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:18.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:18 vm07 bash[17804]: cluster 2026-03-10T11:29:17.804017+0000 mgr.y (mgr.24310) 150 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:21.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:20 vm07 bash[17804]: cluster 2026-03-10T11:29:19.804331+0000 mgr.y (mgr.24310) 151 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:21.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:20 vm05 bash[22470]: cluster 2026-03-10T11:29:19.804331+0000 mgr.y (mgr.24310) 151 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:21.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:20 vm05 bash[17453]: cluster 2026-03-10T11:29:19.804331+0000 mgr.y (mgr.24310) 151 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:21.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:29:21 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:21] "GET /metrics HTTP/1.1" 200 - "" 
"Prometheus/2.33.4" 2026-03-10T11:29:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:22 vm07 bash[17804]: cluster 2026-03-10T11:29:21.804695+0000 mgr.y (mgr.24310) 152 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:22 vm07 bash[17804]: audit 2026-03-10T11:29:21.994776+0000 mgr.y (mgr.24310) 153 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:23.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:22 vm05 bash[22470]: cluster 2026-03-10T11:29:21.804695+0000 mgr.y (mgr.24310) 152 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:23.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:22 vm05 bash[22470]: audit 2026-03-10T11:29:21.994776+0000 mgr.y (mgr.24310) 153 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:23.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:22 vm05 bash[17453]: cluster 2026-03-10T11:29:21.804695+0000 mgr.y (mgr.24310) 152 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:23.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:22 vm05 bash[17453]: audit 2026-03-10T11:29:21.994776+0000 mgr.y (mgr.24310) 153 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:23 vm05 bash[42794]: level=error ts=2026-03-10T11:29:23.514Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:29:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:23.515Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:29:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:23.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:29:24.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:29:23 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:23] "GET /metrics HTTP/1.1" 200 207650 "" 
"Prometheus/2.33.4" 2026-03-10T11:29:25.166 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:24 vm05 bash[22470]: cluster 2026-03-10T11:29:23.805039+0000 mgr.y (mgr.24310) 154 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:25.166 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:24 vm05 bash[17453]: cluster 2026-03-10T11:29:23.805039+0000 mgr.y (mgr.24310) 154 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:24 vm07 bash[17804]: cluster 2026-03-10T11:29:23.805039+0000 mgr.y (mgr.24310) 154 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:27.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:26 vm07 bash[17804]: cluster 2026-03-10T11:29:25.805308+0000 mgr.y (mgr.24310) 155 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:27.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:26 vm05 bash[22470]: cluster 2026-03-10T11:29:25.805308+0000 mgr.y (mgr.24310) 155 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:27.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:26 vm05 bash[17453]: cluster 2026-03-10T11:29:25.805308+0000 mgr.y (mgr.24310) 155 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:28 vm05 bash[22470]: cluster 2026-03-10T11:29:27.805799+0000 mgr.y (mgr.24310) 156 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:28 vm05 bash[17453]: cluster 2026-03-10T11:29:27.805799+0000 mgr.y (mgr.24310) 156 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:28.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:28 vm07 bash[17804]: cluster 2026-03-10T11:29:27.805799+0000 mgr.y (mgr.24310) 156 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:31.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:30 vm07 bash[17804]: cluster 2026-03-10T11:29:29.806093+0000 mgr.y (mgr.24310) 157 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:31.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:30 vm05 bash[22470]: cluster 2026-03-10T11:29:29.806093+0000 mgr.y (mgr.24310) 157 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:31.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:30 vm05 bash[17453]: cluster 2026-03-10T11:29:29.806093+0000 mgr.y (mgr.24310) 157 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:31.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:29:31 vm07 bash[18531]: 
::ffff:192.168.123.107 - - [10/Mar/2026:11:29:31] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:29:33.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:32 vm05 bash[22470]: cluster 2026-03-10T11:29:31.806478+0000 mgr.y (mgr.24310) 158 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:33.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:32 vm05 bash[22470]: audit 2026-03-10T11:29:32.003858+0000 mgr.y (mgr.24310) 159 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:33.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:32 vm05 bash[17453]: cluster 2026-03-10T11:29:31.806478+0000 mgr.y (mgr.24310) 158 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:33.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:32 vm05 bash[17453]: audit 2026-03-10T11:29:32.003858+0000 mgr.y (mgr.24310) 159 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:32 vm07 bash[17804]: cluster 2026-03-10T11:29:31.806478+0000 mgr.y (mgr.24310) 158 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:32 vm07 bash[17804]: audit 2026-03-10T11:29:32.003858+0000 mgr.y (mgr.24310) 159 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:33.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:33 vm05 bash[42794]: level=error ts=2026-03-10T11:29:33.514Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:29:33.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:33.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:29:33.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:33.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:29:34.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:29:33 vm05 bash[17722]: 
::ffff:192.168.123.107 - - [10/Mar/2026:11:29:33] "GET /metrics HTTP/1.1" 200 207627 "" "Prometheus/2.33.4" 2026-03-10T11:29:35.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:34 vm05 bash[22470]: cluster 2026-03-10T11:29:33.806897+0000 mgr.y (mgr.24310) 160 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:35.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:34 vm05 bash[17453]: cluster 2026-03-10T11:29:33.806897+0000 mgr.y (mgr.24310) 160 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:34 vm07 bash[17804]: cluster 2026-03-10T11:29:33.806897+0000 mgr.y (mgr.24310) 160 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:37 vm07 bash[17804]: cluster 2026-03-10T11:29:35.807227+0000 mgr.y (mgr.24310) 161 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:37.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:37 vm05 bash[22470]: cluster 2026-03-10T11:29:35.807227+0000 mgr.y (mgr.24310) 161 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:37.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:37 vm05 bash[17453]: cluster 2026-03-10T11:29:35.807227+0000 mgr.y (mgr.24310) 161 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:38.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:38 vm05 bash[22470]: cluster 2026-03-10T11:29:37.807794+0000 mgr.y (mgr.24310) 162 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:38.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:38 vm05 bash[17453]: cluster 2026-03-10T11:29:37.807794+0000 mgr.y (mgr.24310) 162 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:38.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:38 vm07 bash[17804]: cluster 2026-03-10T11:29:37.807794+0000 mgr.y (mgr.24310) 162 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:40 vm07 bash[17804]: cluster 2026-03-10T11:29:39.808159+0000 mgr.y (mgr.24310) 163 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:41.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:40 vm05 bash[22470]: cluster 2026-03-10T11:29:39.808159+0000 mgr.y (mgr.24310) 163 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:41.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:40 vm05 bash[17453]: cluster 2026-03-10T11:29:39.808159+0000 mgr.y (mgr.24310) 163 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T11:29:41.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:29:41 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:41] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:29:43.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:43 vm05 bash[22470]: cluster 2026-03-10T11:29:41.808552+0000 mgr.y (mgr.24310) 164 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:43.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:43 vm05 bash[22470]: audit 2026-03-10T11:29:42.012156+0000 mgr.y (mgr.24310) 165 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:43.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:43 vm05 bash[17453]: cluster 2026-03-10T11:29:41.808552+0000 mgr.y (mgr.24310) 164 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:43.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:43 vm05 bash[17453]: audit 2026-03-10T11:29:42.012156+0000 mgr.y (mgr.24310) 165 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:43.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:43 vm07 bash[17804]: cluster 2026-03-10T11:29:41.808552+0000 mgr.y (mgr.24310) 164 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:43.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:43 vm07 bash[17804]: audit 2026-03-10T11:29:42.012156+0000 mgr.y (mgr.24310) 165 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:43.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:43 vm05 bash[42794]: level=error ts=2026-03-10T11:29:43.515Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:29:43.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:43.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:29:43.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:43.516Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 
2026-03-10T11:29:44.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:29:43 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:43] "GET /metrics HTTP/1.1" 200 207627 "" "Prometheus/2.33.4" 2026-03-10T11:29:45.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:45 vm05 bash[17453]: cluster 2026-03-10T11:29:43.809000+0000 mgr.y (mgr.24310) 166 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:45.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:45 vm05 bash[22470]: cluster 2026-03-10T11:29:43.809000+0000 mgr.y (mgr.24310) 166 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:45.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:45 vm07 bash[17804]: cluster 2026-03-10T11:29:43.809000+0000 mgr.y (mgr.24310) 166 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:47.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:47 vm05 bash[17453]: cluster 2026-03-10T11:29:45.809318+0000 mgr.y (mgr.24310) 167 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:47.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:47 vm05 bash[22470]: cluster 2026-03-10T11:29:45.809318+0000 mgr.y (mgr.24310) 167 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:47.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:47 vm07 bash[17804]: cluster 2026-03-10T11:29:45.809318+0000 mgr.y (mgr.24310) 167 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:48.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:48 vm07 bash[17804]: cluster 2026-03-10T11:29:47.809954+0000 mgr.y (mgr.24310) 168 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:49.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:48 vm05 bash[17453]: cluster 2026-03-10T11:29:47.809954+0000 mgr.y (mgr.24310) 168 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:49.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:48 vm05 bash[22470]: cluster 2026-03-10T11:29:47.809954+0000 mgr.y (mgr.24310) 168 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:51.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:50 vm07 bash[17804]: cluster 2026-03-10T11:29:49.810304+0000 mgr.y (mgr.24310) 169 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:51.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:50 vm05 bash[17453]: cluster 2026-03-10T11:29:49.810304+0000 mgr.y (mgr.24310) 169 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:51.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:50 vm05 bash[22470]: cluster 2026-03-10T11:29:49.810304+0000 mgr.y (mgr.24310) 169 : cluster [DBG] pgmap v141: 161 pgs: 161 
active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:51.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:29:51 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:51] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:29:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:52 vm07 bash[17804]: cluster 2026-03-10T11:29:51.810807+0000 mgr.y (mgr.24310) 170 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:52 vm07 bash[17804]: audit 2026-03-10T11:29:52.021547+0000 mgr.y (mgr.24310) 171 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:53.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:52 vm05 bash[17453]: cluster 2026-03-10T11:29:51.810807+0000 mgr.y (mgr.24310) 170 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:53.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:52 vm05 bash[17453]: audit 2026-03-10T11:29:52.021547+0000 mgr.y (mgr.24310) 171 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:53.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:52 vm05 bash[22470]: cluster 2026-03-10T11:29:51.810807+0000 mgr.y (mgr.24310) 170 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:53.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:52 vm05 bash[22470]: audit 2026-03-10T11:29:52.021547+0000 mgr.y (mgr.24310) 171 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:29:53.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:53 vm05 bash[42794]: level=error ts=2026-03-10T11:29:53.516Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:29:53.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:53.517Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:29:53.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:29:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:29:53.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate 
certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:29:54.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:29:53 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:29:53] "GET /metrics HTTP/1.1" 200 207608 "" "Prometheus/2.33.4" 2026-03-10T11:29:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:54 vm07 bash[17804]: cluster 2026-03-10T11:29:53.811221+0000 mgr.y (mgr.24310) 172 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:55.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:54 vm05 bash[22470]: cluster 2026-03-10T11:29:53.811221+0000 mgr.y (mgr.24310) 172 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:55.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:54 vm05 bash[17453]: cluster 2026-03-10T11:29:53.811221+0000 mgr.y (mgr.24310) 172 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:57.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:56 vm07 bash[17804]: cluster 2026-03-10T11:29:55.811572+0000 mgr.y (mgr.24310) 173 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:57.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:56 vm05 bash[22470]: cluster 2026-03-10T11:29:55.811572+0000 mgr.y (mgr.24310) 173 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:57.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:56 vm05 bash[17453]: cluster 2026-03-10T11:29:55.811572+0000 mgr.y (mgr.24310) 173 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:29:58.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:29:58 vm07 bash[17804]: cluster 2026-03-10T11:29:57.812127+0000 mgr.y (mgr.24310) 174 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:59.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:29:58 vm05 bash[22470]: cluster 2026-03-10T11:29:57.812127+0000 mgr.y (mgr.24310) 174 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:29:59.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:29:58 vm05 bash[17453]: cluster 2026-03-10T11:29:57.812127+0000 mgr.y (mgr.24310) 174 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:01.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:00 vm07 bash[17804]: cluster 2026-03-10T11:29:59.812833+0000 mgr.y (mgr.24310) 175 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:01.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:00 vm07 bash[17804]: cluster 2026-03-10T11:30:00.000112+0000 mon.a (mon.0) 746 : cluster [INF] overall HEALTH_OK 2026-03-10T11:30:01.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:00 vm05 bash[22470]: cluster 2026-03-10T11:29:59.812833+0000 mgr.y (mgr.24310) 175 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB 
data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:01.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:00 vm05 bash[22470]: cluster 2026-03-10T11:30:00.000112+0000 mon.a (mon.0) 746 : cluster [INF] overall HEALTH_OK 2026-03-10T11:30:01.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:00 vm05 bash[17453]: cluster 2026-03-10T11:29:59.812833+0000 mgr.y (mgr.24310) 175 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:01.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:00 vm05 bash[17453]: cluster 2026-03-10T11:30:00.000112+0000 mon.a (mon.0) 746 : cluster [INF] overall HEALTH_OK 2026-03-10T11:30:01.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:30:01 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:01] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:30:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:02 vm07 bash[17804]: cluster 2026-03-10T11:30:01.813388+0000 mgr.y (mgr.24310) 176 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:02 vm07 bash[17804]: audit 2026-03-10T11:30:02.030768+0000 mgr.y (mgr.24310) 177 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:03.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:02 vm05 bash[22470]: cluster 2026-03-10T11:30:01.813388+0000 mgr.y (mgr.24310) 176 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:03.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:02 vm05 bash[22470]: audit 2026-03-10T11:30:02.030768+0000 mgr.y (mgr.24310) 177 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:03.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:02 vm05 bash[17453]: cluster 2026-03-10T11:30:01.813388+0000 mgr.y (mgr.24310) 176 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:03.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:02 vm05 bash[17453]: audit 2026-03-10T11:30:02.030768+0000 mgr.y (mgr.24310) 177 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:03.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:03 vm05 bash[42794]: level=error ts=2026-03-10T11:30:03.517Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:30:03.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:03.518Z caller=notify.go:724 component=dispatcher 
receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:30:03.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:03.518Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:30:04.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:30:03 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:03] "GET /metrics HTTP/1.1" 200 207599 "" "Prometheus/2.33.4" 2026-03-10T11:30:05.174 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:04 vm05 bash[22470]: cluster 2026-03-10T11:30:03.813792+0000 mgr.y (mgr.24310) 178 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:05.174 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:04 vm05 bash[17453]: cluster 2026-03-10T11:30:03.813792+0000 mgr.y (mgr.24310) 178 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:05.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:04 vm07 bash[17804]: cluster 2026-03-10T11:30:03.813792+0000 mgr.y (mgr.24310) 178 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:05.231 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim false --force' 2026-03-10T11:30:05.712 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false --force' 2026-03-10T11:30:06.204 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph config set global log_to_journald false --force' 2026-03-10T11:30:06.673 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps' 2026-03-10T11:30:07.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:06 vm05 bash[22470]: cluster 2026-03-10T11:30:05.815138+0000 mgr.y (mgr.24310) 179 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:07.094 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:06 vm05 bash[17453]: cluster 2026-03-10T11:30:05.815138+0000 mgr.y (mgr.24310) 179 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:07.117 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:30:07.117 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (3m) 2m ago 3m 12.0M - ba2b418f427c 3a344bc09343
2026-03-10T11:30:07.117 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (3m) 2m ago 3m 41.8M - 8.3.5 dad864ee21e9 a53e654c60d5
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (2m) 2m ago 2m 63.2M - 3.5 e1d6a67b021e 7c51d6393d48
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443 running (6m) 2m ago 6m 397M - 17.2.0 e1d6a67b021e 3dfd87df1da0
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:9283 running (6m) 2m ago 6m 441M - 17.2.0 e1d6a67b021e c74ea9550b91
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (7m) 2m ago 7m 47.2M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (6m) 2m ago 6m 44.7M 2048M 17.2.0 e1d6a67b021e 824de3717020
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (6m) 2m ago 6m 44.5M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (3m) 2m ago 3m 6624k - 1dbe0e931976 77163141ef6d
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (3m) 2m ago 3m 6788k - 1dbe0e931976 142eaa08cfb0
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (5m) 2m ago 5m 44.7M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (5m) 2m ago 5m 46.8M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (5m) 2m ago 5m 44.4M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (5m) 2m ago 5m 43.0M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (4m) 2m ago 4m 44.4M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (4m) 2m ago 4m 42.7M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (4m) 2m ago 4m 41.4M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (4m) 2m ago 4m 43.8M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (3m) 2m ago 3m 37.4M - 514e6a882f6e 979d30e0f128
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (3m) 2m ago 3m 81.2M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:30:07.118 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (3m) 2m ago 3m 81.8M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
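The 'ceph orch ps' table above is the human-readable daemon inventory; when scripting the same check, the orchestrator can emit JSON instead. A minimal sketch, assuming the daemon_type/daemon_id/version field names used by recent cephadm releases (the JSON schema is not a stable interface, so verify against the running version):

    # List every daemon still reporting the pre-upgrade 17.2.0 version.
    ceph orch ps --format json \
      | jq -r '.[] | select(.version == "17.2.0") | "\(.daemon_type).\(.daemon_id)"'
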
2026-03-10T11:30:07.166 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-10T11:30:07.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:06 vm07 bash[17804]: cluster 2026-03-10T11:30:05.815138+0000 mgr.y (mgr.24310) 179 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {},
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 15
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:30:07.623 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:30:07.673 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph -s'
2026-03-10T11:30:08.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:07 vm05 bash[22470]: audit 2026-03-10T11:30:07.114071+0000 mgr.y (mgr.24310) 180 : audit [DBG] from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:30:08.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:07 vm05 bash[22470]: audit 2026-03-10T11:30:07.623781+0000 mon.a (mon.0) 747 : audit [DBG] from='client.?
192.168.123.105:0/1123738616' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:30:08.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:07 vm05 bash[17453]: audit 2026-03-10T11:30:07.114071+0000 mgr.y (mgr.24310) 180 : audit [DBG] from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:30:08.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:07 vm05 bash[17453]: audit 2026-03-10T11:30:07.623781+0000 mon.a (mon.0) 747 : audit [DBG] from='client.? 192.168.123.105:0/1123738616' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:30:08.109 INFO:teuthology.orchestra.run.vm05.stdout: cluster:
2026-03-10T11:30:08.109 INFO:teuthology.orchestra.run.vm05.stdout: id: 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:30:08.109 INFO:teuthology.orchestra.run.vm05.stdout: health: HEALTH_OK
2026-03-10T11:30:08.109 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:30:08.109 INFO:teuthology.orchestra.run.vm05.stdout: services:
2026-03-10T11:30:08.109 INFO:teuthology.orchestra.run.vm05.stdout: mon: 3 daemons, quorum a,c,b (age 6m)
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: mgr: y(active, since 3m), standbys: x
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: osd: 8 osds: 8 up (since 4m), 8 in (since 4m)
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: rgw: 2 daemons active (2 hosts, 1 zones)
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: data:
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: pools: 6 pools, 161 pgs
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: objects: 209 objects, 457 KiB
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: usage: 71 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: pgs: 161 active+clean
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: io:
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout: client: 853 B/s rd, 0 op/s rd, 0 op/s wr
2026-03-10T11:30:08.110 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:30:08.161 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ls'
2026-03-10T11:30:08.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:07 vm07 bash[17804]: audit 2026-03-10T11:30:07.114071+0000 mgr.y (mgr.24310) 180 : audit [DBG] from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:30:08.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:07 vm07 bash[17804]: audit 2026-03-10T11:30:07.623781+0000 mon.a (mon.0) 747 : audit [DBG] from='client.?
192.168.123.105:0/1123738616' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:30:08.599 INFO:teuthology.orchestra.run.vm05.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-10T11:30:08.599 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager ?:9093,9094 1/1 2m ago 3m vm05=a;count:1
2026-03-10T11:30:08.599 INFO:teuthology.orchestra.run.vm05.stdout:grafana ?:3000 1/1 2m ago 3m vm07=a;count:1
2026-03-10T11:30:08.599 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo 1/1 2m ago 3m count:1
2026-03-10T11:30:08.599 INFO:teuthology.orchestra.run.vm05.stdout:mgr 2/2 2m ago 6m vm05=y;vm07=x;count:2
2026-03-10T11:30:08.599 INFO:teuthology.orchestra.run.vm05.stdout:mon 3/3 2m ago 6m vm05:192.168.123.105=a;vm05:[v2:192.168.123.105:3301,v1:192.168.123.105:6790]=c;vm07:192.168.123.107=b;count:3
2026-03-10T11:30:08.599 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter ?:9100 2/2 2m ago 3m vm05=a;vm07=b;count:2
2026-03-10T11:30:08.600 INFO:teuthology.orchestra.run.vm05.stdout:osd 8 2m ago -
2026-03-10T11:30:08.600 INFO:teuthology.orchestra.run.vm05.stdout:prometheus ?:9095 1/1 2m ago 3m vm07=a;count:1
2026-03-10T11:30:08.600 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo ?:8000 2/2 2m ago 3m count:2
2026-03-10T11:30:08.654 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)" --image quay.ceph.io/ceph-ci/ceph:$sha1'
2026-03-10T11:30:09.084 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:08 vm05 bash[22470]: cluster 2026-03-10T11:30:07.815776+0000 mgr.y (mgr.24310) 181 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:30:09.084 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:08 vm05 bash[22470]: audit 2026-03-10T11:30:08.110217+0000 mon.a (mon.0) 748 : audit [DBG] from='client.? 192.168.123.105:0/3537260026' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:30:09.084 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:08 vm05 bash[17453]: cluster 2026-03-10T11:30:07.815776+0000 mgr.y (mgr.24310) 181 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:30:09.084 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:08 vm05 bash[17453]: audit 2026-03-10T11:30:08.110217+0000 mon.a (mon.0) 748 : audit [DBG] from='client.? 192.168.123.105:0/3537260026' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:30:09.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:08 vm07 bash[17804]: cluster 2026-03-10T11:30:07.815776+0000 mgr.y (mgr.24310) 181 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:30:09.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:08 vm07 bash[17804]: audit 2026-03-10T11:30:08.110217+0000 mon.a (mon.0) 748 : audit [DBG] from='client.?
192.168.123.105:0/3537260026' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:30:09.499 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled to redeploy mgr.x on host 'vm07'
2026-03-10T11:30:09.565 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps --refresh'
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (3m) 2m ago 3m 12.0M - ba2b418f427c 3a344bc09343
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (3m) 2m ago 3m 41.8M - 8.3.5 dad864ee21e9 a53e654c60d5
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (2m) 2m ago 2m 63.2M - 3.5 e1d6a67b021e 7c51d6393d48
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443 running (6m) 2m ago 6m 397M - 17.2.0 e1d6a67b021e 3dfd87df1da0
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:9283 running (7m) 2m ago 7m 441M - 17.2.0 e1d6a67b021e c74ea9550b91
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (7m) 2m ago 7m 47.2M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (6m) 2m ago 6m 44.7M 2048M 17.2.0 e1d6a67b021e 824de3717020
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (6m) 2m ago 6m 44.5M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (3m) 2m ago 3m 6624k - 1dbe0e931976 77163141ef6d
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (3m) 2m ago 3m 6788k - 1dbe0e931976 142eaa08cfb0
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (6m) 2m ago 6m 44.7M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (5m) 2m ago 5m 46.8M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (5m) 2m ago 5m 44.4M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (5m) 2m ago 5m 43.0M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (4m) 2m ago 5m 44.4M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (4m) 2m ago 4m 42.7M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (4m) 2m ago 4m 41.4M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (4m) 2m ago 4m 43.8M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (3m) 2m ago 3m 37.4M - 514e6a882f6e 979d30e0f128
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (3m) 2m ago 3m 81.2M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:30:10.009 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (3m) 2m ago 3m 81.8M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:30:10.066 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:09 vm05 bash[22470]: audit 2026-03-10T11:30:08.597456+0000 mgr.y (mgr.24310) 182 : audit [DBG] from='client.14829 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:30:10.066 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:09 vm05 bash[22470]: audit 2026-03-10T11:30:09.306439+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 192.168.123.105:0/3851102396' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-10T11:30:10.066 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:09 vm05 bash[22470]: audit 2026-03-10T11:30:09.497269+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:30:10.066 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:09 vm05 bash[22470]: audit 2026-03-10T11:30:09.540498+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:30:10.066 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:09 vm05 bash[17453]: audit 2026-03-10T11:30:08.597456+0000 mgr.y (mgr.24310) 182 : audit [DBG] from='client.14829 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:30:10.066 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:09 vm05 bash[17453]: audit 2026-03-10T11:30:09.306439+0000 mon.c (mon.1) 34 : audit [DBG] from='client.? 192.168.123.105:0/3851102396' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-10T11:30:10.066 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:09 vm05 bash[17453]: audit 2026-03-10T11:30:09.497269+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:30:10.066 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:09 vm05 bash[17453]: audit 2026-03-10T11:30:09.540498+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:30:10.066 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180'
2026-03-10T11:30:10.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:09 vm07 bash[17804]: audit 2026-03-10T11:30:08.597456+0000 mgr.y (mgr.24310) 182 : audit [DBG] from='client.14829 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:30:10.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:09 vm07 bash[17804]: audit 2026-03-10T11:30:09.306439+0000 mon.c (mon.1) 34 : audit [DBG] from='client.?
192.168.123.105:0/3851102396' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-10T11:30:10.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:09 vm07 bash[17804]: audit 2026-03-10T11:30:09.497269+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24310 ' entity='mgr.y' 2026-03-10T11:30:10.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:09 vm07 bash[17804]: audit 2026-03-10T11:30:09.540498+0000 mon.b (mon.2) 108 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:30:11.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:10 vm05 bash[22470]: audit 2026-03-10T11:30:09.491390+0000 mgr.y (mgr.24310) 183 : audit [DBG] from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.x", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:30:11.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:10 vm05 bash[22470]: cephadm 2026-03-10T11:30:09.500013+0000 mgr.y (mgr.24310) 184 : cephadm [INF] Schedule redeploy daemon mgr.x 2026-03-10T11:30:11.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:10 vm05 bash[22470]: cluster 2026-03-10T11:30:09.816069+0000 mgr.y (mgr.24310) 185 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:11.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:10 vm05 bash[22470]: audit 2026-03-10T11:30:10.004865+0000 mgr.y (mgr.24310) 186 : audit [DBG] from='client.24727 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:30:11.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:10 vm05 bash[17453]: audit 2026-03-10T11:30:09.491390+0000 mgr.y (mgr.24310) 183 : audit [DBG] from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.x", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:30:11.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:10 vm05 bash[17453]: cephadm 2026-03-10T11:30:09.500013+0000 mgr.y (mgr.24310) 184 : cephadm [INF] Schedule redeploy daemon mgr.x 2026-03-10T11:30:11.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:10 vm05 bash[17453]: cluster 2026-03-10T11:30:09.816069+0000 mgr.y (mgr.24310) 185 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:11.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:10 vm05 bash[17453]: audit 2026-03-10T11:30:10.004865+0000 mgr.y (mgr.24310) 186 : audit [DBG] from='client.24727 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:30:11.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:10 vm07 bash[17804]: audit 2026-03-10T11:30:09.491390+0000 mgr.y (mgr.24310) 183 : audit [DBG] from='client.14838 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.x", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:30:11.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:10 vm07 bash[17804]: cephadm 2026-03-10T11:30:09.500013+0000 mgr.y (mgr.24310) 184 : cephadm [INF] Schedule redeploy daemon mgr.x 
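The "Schedule redeploy daemon mgr.x" entry above is the staggered-upgrade pivot: the standby mgr is moved to the target image first, while the active mgr.y keeps orchestrating. The task locates the standby with a three-stage jq pipeline ('jq .standbys | jq .[] | jq -r .name'); the same lookup collapses into a single jq program. A sketch, assuming exactly one standby mgr (true for this two-mgr cluster) and reusing the sha1 value the task passes in via -e:

    # Single-expression equivalent of the standby lookup used by the task.
    standby=$(ceph mgr dump -f json | jq -r '.standbys[0].name')
    ceph orch daemon redeploy "mgr.${standby}" --image "quay.ceph.io/ceph-ci/ceph:${sha1}"
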
2026-03-10T11:30:11.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:10 vm07 bash[17804]: cluster 2026-03-10T11:30:09.816069+0000 mgr.y (mgr.24310) 185 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:11.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:10 vm07 bash[17804]: audit 2026-03-10T11:30:10.004865+0000 mgr.y (mgr.24310) 186 : audit [DBG] from='client.24727 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:30:11.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:30:11 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:11] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:30:13.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:12 vm05 bash[22470]: cluster 2026-03-10T11:30:11.816611+0000 mgr.y (mgr.24310) 187 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:13.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:12 vm05 bash[22470]: audit 2026-03-10T11:30:12.040272+0000 mgr.y (mgr.24310) 188 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:13.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:12 vm05 bash[17453]: cluster 2026-03-10T11:30:11.816611+0000 mgr.y (mgr.24310) 187 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:13.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:12 vm05 bash[17453]: audit 2026-03-10T11:30:12.040272+0000 mgr.y (mgr.24310) 188 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:13.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:12 vm07 bash[17804]: cluster 2026-03-10T11:30:11.816611+0000 mgr.y (mgr.24310) 187 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:13.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:12 vm07 bash[17804]: audit 2026-03-10T11:30:12.040272+0000 mgr.y (mgr.24310) 188 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:13.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:13 vm05 bash[42794]: level=error ts=2026-03-10T11:30:13.517Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:30:13.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:13.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post 
\"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:30:13.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:13.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:30:14.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:30:13 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:13] "GET /metrics HTTP/1.1" 200 207599 "" "Prometheus/2.33.4" 2026-03-10T11:30:15.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:14 vm05 bash[22470]: cluster 2026-03-10T11:30:13.816924+0000 mgr.y (mgr.24310) 189 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:15.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:14 vm05 bash[17453]: cluster 2026-03-10T11:30:13.816924+0000 mgr.y (mgr.24310) 189 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:15.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:14 vm07 bash[17804]: cluster 2026-03-10T11:30:13.816924+0000 mgr.y (mgr.24310) 189 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:16.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:15 vm05 bash[22470]: audit 2026-03-10T11:30:15.934079+0000 mon.b (mon.2) 109 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:30:16.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:15 vm05 bash[22470]: audit 2026-03-10T11:30:15.934619+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:30:16.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:15 vm05 bash[22470]: audit 2026-03-10T11:30:15.962683+0000 mon.b (mon.2) 110 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:30:16.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:15 vm05 bash[22470]: audit 2026-03-10T11:30:15.963051+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:30:16.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:15 vm05 bash[17453]: audit 2026-03-10T11:30:15.934079+0000 mon.b (mon.2) 109 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:30:16.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:15 vm05 bash[17453]: audit 2026-03-10T11:30:15.934619+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: 
dispatch 2026-03-10T11:30:16.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:15 vm05 bash[17453]: audit 2026-03-10T11:30:15.962683+0000 mon.b (mon.2) 110 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:30:16.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:15 vm05 bash[17453]: audit 2026-03-10T11:30:15.963051+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:30:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:15 vm07 bash[17804]: audit 2026-03-10T11:30:15.934079+0000 mon.b (mon.2) 109 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:30:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:15 vm07 bash[17804]: audit 2026-03-10T11:30:15.934619+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:30:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:15 vm07 bash[17804]: audit 2026-03-10T11:30:15.962683+0000 mon.b (mon.2) 110 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:30:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:15 vm07 bash[17804]: audit 2026-03-10T11:30:15.963051+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:30:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:16 vm05 bash[22470]: cluster 2026-03-10T11:30:15.817235+0000 mgr.y (mgr.24310) 190 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:17.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:16 vm05 bash[17453]: cluster 2026-03-10T11:30:15.817235+0000 mgr.y (mgr.24310) 190 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:17.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:16 vm07 bash[17804]: cluster 2026-03-10T11:30:15.817235+0000 mgr.y (mgr.24310) 190 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:18.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:18 vm07 bash[17804]: cluster 2026-03-10T11:30:17.817787+0000 mgr.y (mgr.24310) 191 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:19.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:18 vm05 bash[22470]: cluster 2026-03-10T11:30:17.817787+0000 mgr.y (mgr.24310) 191 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:19.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:18 vm05 bash[17453]: cluster 2026-03-10T11:30:17.817787+0000 mgr.y (mgr.24310) 191 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB 
data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:21.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:20 vm07 bash[17804]: cluster 2026-03-10T11:30:19.818137+0000 mgr.y (mgr.24310) 192 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:21.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:20 vm05 bash[22470]: cluster 2026-03-10T11:30:19.818137+0000 mgr.y (mgr.24310) 192 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:21.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:20 vm05 bash[17453]: cluster 2026-03-10T11:30:19.818137+0000 mgr.y (mgr.24310) 192 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:21.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:30:21 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:21] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:30:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:22 vm07 bash[17804]: cluster 2026-03-10T11:30:21.818598+0000 mgr.y (mgr.24310) 193 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:22 vm07 bash[17804]: audit 2026-03-10T11:30:22.048828+0000 mgr.y (mgr.24310) 194 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:23.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:22 vm05 bash[22470]: cluster 2026-03-10T11:30:21.818598+0000 mgr.y (mgr.24310) 193 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:23.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:22 vm05 bash[22470]: audit 2026-03-10T11:30:22.048828+0000 mgr.y (mgr.24310) 194 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:23.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:22 vm05 bash[17453]: cluster 2026-03-10T11:30:21.818598+0000 mgr.y (mgr.24310) 193 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:23.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:22 vm05 bash[17453]: audit 2026-03-10T11:30:22.048828+0000 mgr.y (mgr.24310) 194 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:23 vm05 bash[42794]: level=error ts=2026-03-10T11:30:23.517Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 
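The recurring alertmanager error above is self-consistent across the run: the dashboard's self-signed certificate carries no IP SANs, while the ceph-dashboard webhook targets the mgrs by raw IP, so every notify attempt fails x509 validation (the doubled slash in webhook[0]'s URL, 8443//api/..., is a separate cosmetic artifact, likely from the configured base URL). One way to confirm the missing SANs from either node, using plain OpenSSL (1.1.1 or later for the -ext flag; nothing cephadm-specific):

    # Dump the subjectAltName extension of the certificate served on 8443;
    # for this failure we expect DNS entries only, no "IP Address:" entries.
    openssl s_client -connect 192.168.123.105:8443 </dev/null 2>/dev/null \
      | openssl x509 -noout -ext subjectAltName
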
2026-03-10T11:30:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:23.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:30:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:23.519Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:30:24.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:30:23 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:23] "GET /metrics HTTP/1.1" 200 207604 "" "Prometheus/2.33.4" 2026-03-10T11:30:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:24 vm07 bash[17804]: cluster 2026-03-10T11:30:23.818886+0000 mgr.y (mgr.24310) 195 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:25.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:24 vm05 bash[22470]: cluster 2026-03-10T11:30:23.818886+0000 mgr.y (mgr.24310) 195 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:25.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:24 vm05 bash[17453]: cluster 2026-03-10T11:30:23.818886+0000 mgr.y (mgr.24310) 195 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:27.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:26 vm07 bash[17804]: cluster 2026-03-10T11:30:25.819196+0000 mgr.y (mgr.24310) 196 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:27.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:26 vm05 bash[22470]: cluster 2026-03-10T11:30:25.819196+0000 mgr.y (mgr.24310) 196 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:27.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:26 vm05 bash[17453]: cluster 2026-03-10T11:30:25.819196+0000 mgr.y (mgr.24310) 196 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:29.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:28 vm05 bash[22470]: cluster 2026-03-10T11:30:27.819783+0000 mgr.y (mgr.24310) 197 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:29.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:28 vm05 bash[17453]: cluster 2026-03-10T11:30:27.819783+0000 mgr.y (mgr.24310) 197 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:29.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:28 vm07 bash[17804]: cluster 2026-03-10T11:30:27.819783+0000 mgr.y (mgr.24310) 197 
: cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:31 vm07 bash[17804]: cluster 2026-03-10T11:30:29.820121+0000 mgr.y (mgr.24310) 198 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:31.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:31 vm05 bash[22470]: cluster 2026-03-10T11:30:29.820121+0000 mgr.y (mgr.24310) 198 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:31.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:31 vm05 bash[17453]: cluster 2026-03-10T11:30:29.820121+0000 mgr.y (mgr.24310) 198 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:31.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:30:31 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:31] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:30:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:33 vm07 bash[17804]: cluster 2026-03-10T11:30:31.820514+0000 mgr.y (mgr.24310) 199 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:33 vm07 bash[17804]: audit 2026-03-10T11:30:32.058761+0000 mgr.y (mgr.24310) 200 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:33.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:33 vm05 bash[22470]: cluster 2026-03-10T11:30:31.820514+0000 mgr.y (mgr.24310) 199 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:33.518 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:33 vm05 bash[22470]: audit 2026-03-10T11:30:32.058761+0000 mgr.y (mgr.24310) 200 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:33.518 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:33 vm05 bash[17453]: cluster 2026-03-10T11:30:31.820514+0000 mgr.y (mgr.24310) 199 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:33.518 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:33 vm05 bash[17453]: audit 2026-03-10T11:30:32.058761+0000 mgr.y (mgr.24310) 200 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:33.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:33 vm05 bash[42794]: level=error ts=2026-03-10T11:30:33.518Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate 
for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:30:33.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:33.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:30:33.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:33.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:30:34.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:30:33 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:33] "GET /metrics HTTP/1.1" 200 207599 "" "Prometheus/2.33.4" 2026-03-10T11:30:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:35 vm07 bash[17804]: cluster 2026-03-10T11:30:33.820803+0000 mgr.y (mgr.24310) 201 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:35.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:35 vm05 bash[22470]: cluster 2026-03-10T11:30:33.820803+0000 mgr.y (mgr.24310) 201 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:35.595 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:35 vm05 bash[17453]: cluster 2026-03-10T11:30:33.820803+0000 mgr.y (mgr.24310) 201 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:37.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:37 vm05 bash[22470]: cluster 2026-03-10T11:30:35.821073+0000 mgr.y (mgr.24310) 202 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:37.595 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:37 vm05 bash[17453]: cluster 2026-03-10T11:30:35.821073+0000 mgr.y (mgr.24310) 202 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:37.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:37 vm07 bash[17804]: cluster 2026-03-10T11:30:35.821073+0000 mgr.y (mgr.24310) 202 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:39.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:38 vm05 bash[22470]: cluster 2026-03-10T11:30:37.821633+0000 mgr.y (mgr.24310) 203 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:39.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:38 vm05 bash[17453]: cluster 2026-03-10T11:30:37.821633+0000 mgr.y (mgr.24310) 203 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:39.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:38 vm07 bash[17804]: 
cluster 2026-03-10T11:30:37.821633+0000 mgr.y (mgr.24310) 203 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:40 vm07 bash[17804]: cluster 2026-03-10T11:30:39.821930+0000 mgr.y (mgr.24310) 204 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:41.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:40 vm05 bash[22470]: cluster 2026-03-10T11:30:39.821930+0000 mgr.y (mgr.24310) 204 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:41.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:40 vm05 bash[17453]: cluster 2026-03-10T11:30:39.821930+0000 mgr.y (mgr.24310) 204 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:41.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:30:41 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:41] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:30:43.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:42 vm05 bash[22470]: cluster 2026-03-10T11:30:41.822373+0000 mgr.y (mgr.24310) 205 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:43.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:42 vm05 bash[22470]: audit 2026-03-10T11:30:42.066506+0000 mgr.y (mgr.24310) 206 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:43.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:42 vm05 bash[17453]: cluster 2026-03-10T11:30:41.822373+0000 mgr.y (mgr.24310) 205 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:43.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:42 vm05 bash[17453]: audit 2026-03-10T11:30:42.066506+0000 mgr.y (mgr.24310) 206 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:43.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:42 vm07 bash[17804]: cluster 2026-03-10T11:30:41.822373+0000 mgr.y (mgr.24310) 205 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:43.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:42 vm07 bash[17804]: audit 2026-03-10T11:30:42.066506+0000 mgr.y (mgr.24310) 206 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:30:43.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:43 vm05 bash[42794]: level=error ts=2026-03-10T11:30:43.519Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post 
\"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:30:43.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:43.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:30:43.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:43.520Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:30:44.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:30:43 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:43] "GET /metrics HTTP/1.1" 200 207599 "" "Prometheus/2.33.4" 2026-03-10T11:30:45.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:44 vm05 bash[22470]: cluster 2026-03-10T11:30:43.822695+0000 mgr.y (mgr.24310) 207 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:45.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:44 vm05 bash[17453]: cluster 2026-03-10T11:30:43.822695+0000 mgr.y (mgr.24310) 207 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:45.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:44 vm07 bash[17804]: cluster 2026-03-10T11:30:43.822695+0000 mgr.y (mgr.24310) 207 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:47.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:46 vm05 bash[22470]: cluster 2026-03-10T11:30:45.823029+0000 mgr.y (mgr.24310) 208 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:47.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:46 vm05 bash[17453]: cluster 2026-03-10T11:30:45.823029+0000 mgr.y (mgr.24310) 208 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:47.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:46 vm07 bash[17804]: cluster 2026-03-10T11:30:45.823029+0000 mgr.y (mgr.24310) 208 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:30:49.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:48 vm05 bash[22470]: cluster 2026-03-10T11:30:47.823555+0000 mgr.y (mgr.24310) 209 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:30:49.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:48 vm05 bash[17453]: cluster 2026-03-10T11:30:47.823555+0000 mgr.y (mgr.24310) 209 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-10T11:30:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:48 vm07 bash[17804]: cluster 2026-03-10T11:30:47.823555+0000 mgr.y (mgr.24310) 209 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:30:51.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:50 vm07 bash[17804]: cluster 2026-03-10T11:30:49.823836+0000 mgr.y (mgr.24310) 210 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:51.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:50 vm05 bash[22470]: cluster 2026-03-10T11:30:49.823836+0000 mgr.y (mgr.24310) 210 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:51.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:50 vm05 bash[17453]: cluster 2026-03-10T11:30:49.823836+0000 mgr.y (mgr.24310) 210 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:51.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:30:51 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:51] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:30:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:52 vm07 bash[17804]: cluster 2026-03-10T11:30:51.824280+0000 mgr.y (mgr.24310) 211 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:30:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:52 vm07 bash[17804]: audit 2026-03-10T11:30:52.076300+0000 mgr.y (mgr.24310) 212 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:30:53.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:52 vm05 bash[22470]: cluster 2026-03-10T11:30:51.824280+0000 mgr.y (mgr.24310) 211 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:30:53.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:52 vm05 bash[22470]: audit 2026-03-10T11:30:52.076300+0000 mgr.y (mgr.24310) 212 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:30:53.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:52 vm05 bash[17453]: cluster 2026-03-10T11:30:51.824280+0000 mgr.y (mgr.24310) 211 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:30:53.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:52 vm05 bash[17453]: audit 2026-03-10T11:30:52.076300+0000 mgr.y (mgr.24310) 212 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:30:53.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:53 vm05 bash[42794]: level=error ts=2026-03-10T11:30:53.519Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:30:53.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:53.521Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:30:53.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:30:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:30:53.522Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:30:54.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:30:53 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:30:53] "GET /metrics HTTP/1.1" 200 207596 "" "Prometheus/2.33.4"
2026-03-10T11:30:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:54 vm07 bash[17804]: cluster 2026-03-10T11:30:53.824678+0000 mgr.y (mgr.24310) 213 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:55.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:54 vm05 bash[22470]: cluster 2026-03-10T11:30:53.824678+0000 mgr.y (mgr.24310) 213 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:55.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:54 vm05 bash[17453]: cluster 2026-03-10T11:30:53.824678+0000 mgr.y (mgr.24310) 213 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:57.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:56 vm07 bash[17804]: cluster 2026-03-10T11:30:55.824963+0000 mgr.y (mgr.24310) 214 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:57.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:56 vm05 bash[22470]: cluster 2026-03-10T11:30:55.824963+0000 mgr.y (mgr.24310) 214 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:57.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:56 vm05 bash[17453]: cluster 2026-03-10T11:30:55.824963+0000 mgr.y (mgr.24310) 214 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:30:59.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:30:58 vm05 bash[22470]: cluster 2026-03-10T11:30:57.825447+0000 mgr.y (mgr.24310) 215 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:30:59.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:30:58 vm05 bash[17453]: cluster 2026-03-10T11:30:57.825447+0000 mgr.y (mgr.24310) 215 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:30:59.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:30:58 vm07 bash[17804]: cluster 2026-03-10T11:30:57.825447+0000 mgr.y (mgr.24310) 215 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:01.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:00 vm07 bash[17804]: cluster 2026-03-10T11:30:59.825751+0000 mgr.y (mgr.24310) 216 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:01.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:00 vm05 bash[22470]: cluster 2026-03-10T11:30:59.825751+0000 mgr.y (mgr.24310) 216 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:01.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:00 vm05 bash[17453]: cluster 2026-03-10T11:30:59.825751+0000 mgr.y (mgr.24310) 216 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:01.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:01 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:01] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:31:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:02 vm07 bash[17804]: cluster 2026-03-10T11:31:01.826148+0000 mgr.y (mgr.24310) 217 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:02 vm07 bash[17804]: audit 2026-03-10T11:31:02.083489+0000 mgr.y (mgr.24310) 218 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:03.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:02 vm05 bash[22470]: cluster 2026-03-10T11:31:01.826148+0000 mgr.y (mgr.24310) 217 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:03.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:02 vm05 bash[22470]: audit 2026-03-10T11:31:02.083489+0000 mgr.y (mgr.24310) 218 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:03.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:02 vm05 bash[17453]: cluster 2026-03-10T11:31:01.826148+0000 mgr.y (mgr.24310) 217 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:03.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:02 vm05 bash[17453]: audit 2026-03-10T11:31:02.083489+0000 mgr.y (mgr.24310) 218 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:03.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:03 vm05 bash[42794]: level=error ts=2026-03-10T11:31:03.521Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:31:03.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:03.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:31:03.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:03.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:31:04.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:31:03 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:03] "GET /metrics HTTP/1.1" 200 207597 "" "Prometheus/2.33.4"
2026-03-10T11:31:05.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:04 vm07 bash[17804]: cluster 2026-03-10T11:31:03.826457+0000 mgr.y (mgr.24310) 219 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:05.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:04 vm05 bash[22470]: cluster 2026-03-10T11:31:03.826457+0000 mgr.y (mgr.24310) 219 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:05.357 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:04 vm05 bash[17453]: cluster 2026-03-10T11:31:03.826457+0000 mgr.y (mgr.24310) 219 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:07.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:07 vm07 bash[17804]: cluster 2026-03-10T11:31:05.826758+0000 mgr.y (mgr.24310) 220 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:08.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:07 vm05 bash[22470]: cluster 2026-03-10T11:31:05.826758+0000 mgr.y (mgr.24310) 220 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:08.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:07 vm05 bash[17453]: cluster 2026-03-10T11:31:05.826758+0000 mgr.y (mgr.24310) 220 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:09.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:08 vm05 bash[22470]: cluster 2026-03-10T11:31:07.827268+0000 mgr.y (mgr.24310) 221 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:09.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:08 vm05 bash[17453]: cluster 2026-03-10T11:31:07.827268+0000 mgr.y (mgr.24310) 221 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:08 vm07 bash[17804]: cluster 2026-03-10T11:31:07.827268+0000 mgr.y (mgr.24310) 221 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:11.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:11 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:11] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:31:12.337 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:12 vm05 bash[17453]: cluster 2026-03-10T11:31:09.827625+0000 mgr.y (mgr.24310) 222 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:12.337 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:12 vm05 bash[22470]: cluster 2026-03-10T11:31:09.827625+0000 mgr.y (mgr.24310) 222 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:12.378 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:12 vm07 bash[17804]: cluster 2026-03-10T11:31:09.827625+0000 mgr.y (mgr.24310) 222 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:13.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:13 vm07 bash[17804]: cluster 2026-03-10T11:31:11.828217+0000 mgr.y (mgr.24310) 223 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:13.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:13 vm07 bash[17804]: audit 2026-03-10T11:31:12.092038+0000 mgr.y (mgr.24310) 224 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:13.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:13 vm07 bash[17804]: audit 2026-03-10T11:31:12.368482+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:13.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:13 vm07 bash[17804]: audit 2026-03-10T11:31:12.371836+0000 mon.b (mon.2) 111 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:31:13.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:13 vm07 bash[17804]: audit 2026-03-10T11:31:12.372553+0000 mon.b (mon.2) 112 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:31:13.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:13 vm05 bash[22470]: cluster 2026-03-10T11:31:11.828217+0000 mgr.y (mgr.24310) 223 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:13.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:13 vm05 bash[22470]: audit 2026-03-10T11:31:12.092038+0000 mgr.y (mgr.24310) 224 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:13.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:13 vm05 bash[22470]: audit 2026-03-10T11:31:12.368482+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:13.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:13 vm05 bash[22470]: audit 2026-03-10T11:31:12.371836+0000 mon.b (mon.2) 111 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:31:13.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:13 vm05 bash[22470]: audit 2026-03-10T11:31:12.372553+0000 mon.b (mon.2) 112 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:31:13.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:13 vm05 bash[17453]: cluster 2026-03-10T11:31:11.828217+0000 mgr.y (mgr.24310) 223 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:13.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:13 vm05 bash[17453]: audit 2026-03-10T11:31:12.092038+0000 mgr.y (mgr.24310) 224 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:13.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:13 vm05 bash[17453]: audit 2026-03-10T11:31:12.368482+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:13.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:13 vm05 bash[17453]: audit 2026-03-10T11:31:12.371836+0000 mon.b (mon.2) 111 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:31:13.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:13 vm05 bash[17453]: audit 2026-03-10T11:31:12.372553+0000 mon.b (mon.2) 112 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:31:13.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:13 vm05 bash[42794]: level=error ts=2026-03-10T11:31:13.521Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:31:13.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:13.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:31:13.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:13.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:31:14.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:31:13 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:13] "GET /metrics HTTP/1.1" 200 207597 "" "Prometheus/2.33.4"
2026-03-10T11:31:15.645 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:15 vm07 bash[17804]: cluster 2026-03-10T11:31:13.828569+0000 mgr.y (mgr.24310) 225 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:15.752 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:15 vm05 bash[17453]: cluster 2026-03-10T11:31:13.828569+0000 mgr.y (mgr.24310) 225 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:15.752 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:15 vm05 bash[22470]: cluster 2026-03-10T11:31:13.828569+0000 mgr.y (mgr.24310) 225 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.493687+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.607337+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.759591+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.762943+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.763327+0000 mon.b (mon.2) 113 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.764859+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.765823+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: cephadm 2026-03-10T11:31:15.765935+0000 mgr.y (mgr.24310) 226 : cephadm [INF] Deploying daemon mgr.x on vm07
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: cluster 2026-03-10T11:31:15.828975+0000 mgr.y (mgr.24310) 227 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.937755+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.938082+0000 mon.b (mon.2) 116 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.965737+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:16 vm05 bash[22470]: audit 2026-03-10T11:31:15.965947+0000 mon.b (mon.2) 117 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.493687+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.607337+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.759591+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.762943+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.763327+0000 mon.b (mon.2) 113 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.764859+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.765823+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: cephadm 2026-03-10T11:31:15.765935+0000 mgr.y (mgr.24310) 226 : cephadm [INF] Deploying daemon mgr.x on vm07
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: cluster 2026-03-10T11:31:15.828975+0000 mgr.y (mgr.24310) 227 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.937755+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.938082+0000 mon.b (mon.2) 116 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.965737+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:31:16.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:16 vm05 bash[17453]: audit 2026-03-10T11:31:15.965947+0000 mon.b (mon.2) 117 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:31:16.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.493687+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:16.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.607337+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:16.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.759591+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:16.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.762943+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:31:16.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.763327+0000 mon.b (mon.2) 113 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:31:16.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.764859+0000 mon.b (mon.2) 114 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T11:31:16.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.765823+0000 mon.b (mon.2) 115 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:31:16.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: cephadm 2026-03-10T11:31:15.765935+0000 mgr.y (mgr.24310) 226 : cephadm [INF] Deploying daemon mgr.x on vm07
2026-03-10T11:31:16.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: cluster 2026-03-10T11:31:15.828975+0000 mgr.y (mgr.24310) 227 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:16.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.937755+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:31:16.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.938082+0000 mon.b (mon.2) 116 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:31:16.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.965737+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:31:16.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:16 vm07 bash[17804]: audit 2026-03-10T11:31:15.965947+0000 mon.b (mon.2) 117 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:31:19.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:18 vm05 bash[22470]: cluster 2026-03-10T11:31:17.829484+0000 mgr.y (mgr.24310) 228 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:19.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:18 vm05 bash[17453]: cluster 2026-03-10T11:31:17.829484+0000 mgr.y (mgr.24310) 228 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:19.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:18 vm07 bash[17804]: cluster 2026-03-10T11:31:17.829484+0000 mgr.y (mgr.24310) 228 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:21.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:20 vm07 bash[17804]: cluster 2026-03-10T11:31:19.829767+0000 mgr.y (mgr.24310) 229 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:21.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:20 vm05 bash[22470]: cluster 2026-03-10T11:31:19.829767+0000 mgr.y (mgr.24310) 229 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:21.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:20 vm05 bash[17453]: cluster 2026-03-10T11:31:19.829767+0000 mgr.y (mgr.24310) 229 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:21.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:21 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:21] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:31:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:22 vm07 bash[17804]: cluster 2026-03-10T11:31:21.830273+0000 mgr.y (mgr.24310) 230 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:22 vm07 bash[17804]: audit 2026-03-10T11:31:22.102123+0000 mgr.y (mgr.24310) 231 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:23.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:22 vm05 bash[22470]: cluster 2026-03-10T11:31:21.830273+0000 mgr.y (mgr.24310) 230 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:23.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:22 vm05 bash[22470]: audit 2026-03-10T11:31:22.102123+0000 mgr.y (mgr.24310) 231 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:23.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:22 vm05 bash[17453]: cluster 2026-03-10T11:31:21.830273+0000 mgr.y (mgr.24310) 230 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:23.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:22 vm05 bash[17453]: audit 2026-03-10T11:31:22.102123+0000 mgr.y (mgr.24310) 231 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:23.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:23 vm05 bash[42794]: level=error ts=2026-03-10T11:31:23.522Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:31:23.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:23.523Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:31:23.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:23.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:31:24.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:31:23 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:23] "GET /metrics HTTP/1.1" 200 207589 "" "Prometheus/2.33.4"
2026-03-10T11:31:25.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:25 vm05 bash[22470]: cluster 2026-03-10T11:31:23.830583+0000 mgr.y (mgr.24310) 232 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:25.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:25 vm05 bash[17453]: cluster 2026-03-10T11:31:23.830583+0000 mgr.y (mgr.24310) 232 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:25.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:25 vm07 bash[17804]: cluster 2026-03-10T11:31:23.830583+0000 mgr.y (mgr.24310) 232 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:27.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:27 vm05 bash[22470]: cluster 2026-03-10T11:31:25.830931+0000 mgr.y (mgr.24310) 233 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:27.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:27 vm05 bash[17453]: cluster 2026-03-10T11:31:25.830931+0000 mgr.y (mgr.24310) 233 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:27.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:27 vm07 bash[17804]: cluster 2026-03-10T11:31:25.830931+0000 mgr.y (mgr.24310) 233 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:29.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:28 vm07 bash[17804]: cluster 2026-03-10T11:31:27.831545+0000 mgr.y (mgr.24310) 234 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:29.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:28 vm05 bash[22470]: cluster 2026-03-10T11:31:27.831545+0000 mgr.y (mgr.24310) 234 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:29.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:28 vm05 bash[17453]: cluster 2026-03-10T11:31:27.831545+0000 mgr.y (mgr.24310) 234 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:31.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:30 vm07 bash[17804]: cluster 2026-03-10T11:31:29.831878+0000 mgr.y (mgr.24310) 235 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:31.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:30 vm05 bash[22470]: cluster 2026-03-10T11:31:29.831878+0000 mgr.y (mgr.24310) 235 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:31.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:30 vm05 bash[17453]: cluster 2026-03-10T11:31:29.831878+0000 mgr.y (mgr.24310) 235 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:31.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:31 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:31] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:31:33.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:32 vm07 bash[17804]: cluster 2026-03-10T11:31:31.832353+0000 mgr.y (mgr.24310) 236 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:33.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:32 vm07 bash[17804]: audit 2026-03-10T11:31:32.112002+0000 mgr.y (mgr.24310) 237 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:33.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:32 vm05 bash[22470]: cluster 2026-03-10T11:31:31.832353+0000 mgr.y (mgr.24310) 236 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:33.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:32 vm05 bash[22470]: audit 2026-03-10T11:31:32.112002+0000 mgr.y (mgr.24310) 237 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:33.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:32 vm05 bash[17453]: cluster 2026-03-10T11:31:31.832353+0000 mgr.y (mgr.24310) 236 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:33.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:32 vm05 bash[17453]: audit 2026-03-10T11:31:32.112002+0000 mgr.y (mgr.24310) 237 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:33.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:33 vm05 bash[42794]: level=error ts=2026-03-10T11:31:33.522Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:31:33.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:33.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:31:33.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:33.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:31:34.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:31:33 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:33] "GET /metrics HTTP/1.1" 200 207596 "" "Prometheus/2.33.4"
2026-03-10T11:31:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:34 vm07 bash[17804]: cluster 2026-03-10T11:31:33.832697+0000 mgr.y (mgr.24310) 238 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:35.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:34 vm05 bash[22470]: cluster 2026-03-10T11:31:33.832697+0000 mgr.y (mgr.24310) 238 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:35.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:34 vm05 bash[17453]: cluster 2026-03-10T11:31:33.832697+0000 mgr.y (mgr.24310) 238 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:36 vm07 bash[17804]: cluster 2026-03-10T11:31:35.833131+0000 mgr.y (mgr.24310) 239 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:37.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:36 vm05 bash[22470]: cluster 2026-03-10T11:31:35.833131+0000 mgr.y (mgr.24310) 239 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:37.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:36 vm05 bash[17453]: cluster 2026-03-10T11:31:35.833131+0000 mgr.y (mgr.24310) 239 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:39.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:38 vm05 bash[22470]: cluster 2026-03-10T11:31:37.833693+0000 mgr.y (mgr.24310) 240 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:39.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:38 vm05 bash[17453]: cluster 2026-03-10T11:31:37.833693+0000 mgr.y (mgr.24310) 240 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:39.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:38 vm07 bash[17804]: cluster 2026-03-10T11:31:37.833693+0000 mgr.y (mgr.24310) 240 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:40 vm07 bash[17804]: cluster 2026-03-10T11:31:39.834012+0000 mgr.y (mgr.24310) 241 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:41.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:40 vm05 bash[22470]: cluster 2026-03-10T11:31:39.834012+0000 mgr.y (mgr.24310) 241 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:41.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:40 vm05 bash[17453]: cluster 2026-03-10T11:31:39.834012+0000 mgr.y (mgr.24310) 241 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:41.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:41 vm07 bash[18531]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:41] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:31:43.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:43 vm05 bash[22470]: cluster 2026-03-10T11:31:41.834509+0000 mgr.y (mgr.24310) 242 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:43.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:43 vm05 bash[22470]: audit 2026-03-10T11:31:42.117767+0000 mgr.y (mgr.24310) 243 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:43.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:43 vm05 bash[17453]: cluster 2026-03-10T11:31:41.834509+0000 mgr.y (mgr.24310) 242 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:43.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:43 vm05 bash[17453]: audit 2026-03-10T11:31:42.117767+0000 mgr.y (mgr.24310) 243 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:43.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:43 vm07 bash[17804]: cluster 2026-03-10T11:31:41.834509+0000 mgr.y (mgr.24310) 242 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:43.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:43 vm07 bash[17804]: audit 2026-03-10T11:31:42.117767+0000 mgr.y (mgr.24310) 243 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:43.525 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:43 vm05 bash[42794]: level=error ts=2026-03-10T11:31:43.523Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:31:43.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:43.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:31:43.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:43.525Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:31:44.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:31:43 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:43] "GET /metrics HTTP/1.1" 200 207596 "" "Prometheus/2.33.4"
2026-03-10T11:31:46.046 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:45 vm07 bash[17804]: cluster 2026-03-10T11:31:43.834816+0000 mgr.y (mgr.24310) 244 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:46.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:45 vm05 bash[22470]: cluster 2026-03-10T11:31:43.834816+0000 mgr.y (mgr.24310) 244 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:46.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:45 vm05 bash[17453]: cluster 2026-03-10T11:31:43.834816+0000 mgr.y (mgr.24310) 244 : cluster [DBG] pgmap v198: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:46 vm05 bash[22470]: cluster 2026-03-10T11:31:45.835271+0000 mgr.y (mgr.24310) 245 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:46 vm05 bash[17453]: cluster 2026-03-10T11:31:45.835271+0000 mgr.y (mgr.24310) 245 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:47.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:46 vm07 bash[17804]: cluster 2026-03-10T11:31:45.835271+0000 mgr.y (mgr.24310) 245 : cluster [DBG] pgmap v199: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:49.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:49 vm05 bash[22470]: cluster 2026-03-10T11:31:47.835795+0000 mgr.y (mgr.24310) 246 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:49.595 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:49 vm05 bash[17453]: cluster 2026-03-10T11:31:47.835795+0000 mgr.y (mgr.24310) 246 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:49.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:49 vm07 bash[17804]: cluster 2026-03-10T11:31:47.835795+0000 mgr.y (mgr.24310) 246 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:50.999 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:50 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:50.999 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:50 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:50.999 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:31:50 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:50.999 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:31:50 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:50.999 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:31:50 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:50.999 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:31:50 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:50.999 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:31:50 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.000 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:31:50 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.000 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:31:50 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.250 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: Stopping Ceph mgr.x for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:31:51.250 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 bash[36573]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mgr.x
2026-03-10T11:31:51.250 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 bash[36580]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mgr-x
2026-03-10T11:31:51.250 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 bash[36613]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mgr.x
2026-03-10T11:31:51.250 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.x.service: Main process exited, code=exited, status=143/n/a
2026-03-10T11:31:51.250 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.x.service: Failed with result 'exit-code'.
2026-03-10T11:31:51.250 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: Stopped Ceph mgr.x for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:31:51.250 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.250 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.537 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.537 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:51 vm07 bash[17804]: cluster 2026-03-10T11:31:49.836137+0000 mgr.y (mgr.24310) 247 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:51.537 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:51 vm07 bash[17804]: audit 2026-03-10T11:31:51.347729+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:51.537 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:51 vm07 bash[17804]: audit 2026-03-10T11:31:51.354807+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:51.537 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:51 vm07 bash[17804]: audit 2026-03-10T11:31:51.357880+0000 mon.b (mon.2) 118 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:31:51.537 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:51 vm07 bash[17804]: audit 2026-03-10T11:31:51.358858+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:31:51.537 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:51 vm07 bash[17804]: audit 2026-03-10T11:31:51.360366+0000 mon.b (mon.2) 120 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:31:51.537 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.537 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: Started Ceph mgr.x for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:31:51.537 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.537 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.537 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
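The mon.b audit trail above shows the freshly restarted mgr re-reading cluster state: a config dump, a minimal client conf, and the admin keyring. The same three calls can be reproduced by hand from any node holding an admin keyring, which is a quick way to verify the mons are serving requests while daemons bounce:

    ceph config dump
    ceph config generate-minimal-conf
    ceph auth get client.admin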
2026-03-10T11:31:51.537 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.537 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:31:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:24: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:31:51.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:51 vm05 bash[22470]: cluster 2026-03-10T11:31:49.836137+0000 mgr.y (mgr.24310) 247 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:51 vm05 bash[22470]: audit 2026-03-10T11:31:51.347729+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:51 vm05 bash[22470]: audit 2026-03-10T11:31:51.354807+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:51 vm05 bash[22470]: audit 2026-03-10T11:31:51.357880+0000 mon.b (mon.2) 118 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:51 vm05 bash[22470]: audit 2026-03-10T11:31:51.358858+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:51 vm05 bash[22470]: audit 2026-03-10T11:31:51.360366+0000 mon.b (mon.2) 120 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:51 vm05 bash[17453]: cluster 2026-03-10T11:31:49.836137+0000 mgr.y (mgr.24310) 247 : cluster [DBG] pgmap v201: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:51 vm05 bash[17453]: audit 2026-03-10T11:31:51.347729+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:51 vm05 bash[17453]: audit 2026-03-10T11:31:51.354807+0000 mon.a (mon.0) 760 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:51 vm05 bash[17453]: audit 2026-03-10T11:31:51.357880+0000 mon.b (mon.2) 118 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:51 vm05 bash[17453]: audit 2026-03-10T11:31:51.358858+0000 mon.b (mon.2) 119 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:31:51.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:51 vm05 bash[17453]: audit 2026-03-10T11:31:51.360366+0000 mon.b (mon.2) 120 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:31:51.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 bash[36672]: debug 2026-03-10T11:31:51.533+0000 7f4735b5e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T11:31:51.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 bash[36672]: debug 2026-03-10T11:31:51.565+0000 7f4735b5e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T11:31:51.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 bash[36672]: debug 2026-03-10T11:31:51.681+0000 7f4735b5e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T11:31:52.407 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:51 vm07 bash[36672]: debug 2026-03-10T11:31:51.961+0000 7f4735b5e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T11:31:52.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: debug 2026-03-10T11:31:52.401+0000 7f4735b5e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T11:31:52.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: debug 2026-03-10T11:31:52.489+0000 7f4735b5e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T11:31:52.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T11:31:52.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
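The "Module X has missing NOTIFY_TYPES member" lines interleaved here appear to be load-time diagnostics from the v17.2.0 mgr enumerating its bundled Python modules as the standby mgr.x comes up, not module failures; each module is reported once. A quick way to confirm the modules actually loaded and which are enabled:

    ceph mgr module ls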
2026-03-10T11:31:52.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: from numpy import show_config as show_numpy_config
2026-03-10T11:31:52.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: debug 2026-03-10T11:31:52.629+0000 7f4735b5e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T11:31:53.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: debug 2026-03-10T11:31:52.769+0000 7f4735b5e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T11:31:53.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: debug 2026-03-10T11:31:52.805+0000 7f4735b5e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T11:31:53.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: debug 2026-03-10T11:31:52.841+0000 7f4735b5e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T11:31:53.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: debug 2026-03-10T11:31:52.885+0000 7f4735b5e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T11:31:53.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:52 vm07 bash[36672]: debug 2026-03-10T11:31:52.937+0000 7f4735b5e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T11:31:53.692 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:53 vm07 bash[17804]: cluster 2026-03-10T11:31:51.836689+0000 mgr.y (mgr.24310) 248 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:53.692 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:53 vm07 bash[17804]: audit 2026-03-10T11:31:52.125701+0000 mgr.y (mgr.24310) 249 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:53.692 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:53 vm07 bash[36672]: debug 2026-03-10T11:31:53.401+0000 7f4735b5e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T11:31:53.693 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:53 vm07 bash[36672]: debug 2026-03-10T11:31:53.445+0000 7f4735b5e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T11:31:53.693 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:53 vm07 bash[36672]: debug 2026-03-10T11:31:53.485+0000 7f4735b5e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T11:31:53.693 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:53 vm07 bash[36672]: debug 2026-03-10T11:31:53.645+0000 7f4735b5e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T11:31:53.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:53 vm05 bash[22470]: cluster 2026-03-10T11:31:51.836689+0000 mgr.y (mgr.24310) 248 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:53.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:53 vm05 bash[22470]: audit 2026-03-10T11:31:52.125701+0000 mgr.y (mgr.24310) 249 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:53.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:53 vm05 bash[17453]: cluster 2026-03-10T11:31:51.836689+0000 mgr.y (mgr.24310) 248 : cluster [DBG] pgmap v202: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:53.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:53 vm05 bash[17453]: audit 2026-03-10T11:31:52.125701+0000 mgr.y (mgr.24310) 249 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:31:53.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:53 vm05 bash[42794]: level=error ts=2026-03-10T11:31:53.524Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:31:53.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:53.524Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": dial tcp 192.168.123.107:8443: connect: connection refused"
2026-03-10T11:31:53.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:53.526Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:31:53.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:53 vm07 bash[36672]: debug 2026-03-10T11:31:53.689+0000 7f4735b5e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T11:31:53.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:53 vm07 bash[36672]: debug 2026-03-10T11:31:53.729+0000 7f4735b5e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T11:31:53.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:53 vm07 bash[36672]: debug 2026-03-10T11:31:53.841+0000 7f4735b5e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:31:54.302 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: debug 2026-03-10T11:31:54.001+0000 7f4735b5e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T11:31:54.302 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: debug 2026-03-10T11:31:54.193+0000 7f4735b5e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T11:31:54.302 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: debug 2026-03-10T11:31:54.245+0000 7f4735b5e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T11:31:54.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:31:53 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:31:53] "GET /metrics HTTP/1.1" 200 207590 "" "Prometheus/2.33.4"
2026-03-10T11:31:54.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: debug 2026-03-10T11:31:54.297+0000 7f4735b5e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T11:31:54.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: debug 2026-03-10T11:31:54.485+0000 7f4735b5e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:31:55.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: debug 2026-03-10T11:31:54.793+0000 7f4735b5e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T11:31:55.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: [10/Mar/2026:11:31:54] ENGINE Bus STARTING
2026-03-10T11:31:55.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: CherryPy Checker:
2026-03-10T11:31:55.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: The Application mounted at '' has an empty config.
2026-03-10T11:31:55.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: [10/Mar/2026:11:31:54] ENGINE Serving on http://:::9283
2026-03-10T11:31:55.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:31:54 vm07 bash[36672]: [10/Mar/2026:11:31:54] ENGINE Bus STARTED
2026-03-10T11:31:55.345 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[42794]: level=warn ts=2026-03-10T11:31:55.064Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=3 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:31:55.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:55 vm07 bash[17804]: cluster 2026-03-10T11:31:53.837022+0000 mgr.y (mgr.24310) 250 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:55.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:55 vm07 bash[17804]: audit 2026-03-10T11:31:54.730126+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:55.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:55 vm07 bash[17804]: audit 2026-03-10T11:31:54.737580+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:55.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:55 vm07 bash[17804]: cluster 2026-03-10T11:31:54.801010+0000 mon.a (mon.0) 763 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T11:31:55.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:55 vm07 bash[17804]: cluster 2026-03-10T11:31:54.801136+0000 mon.a (mon.0) 764 : cluster [DBG] Standby manager daemon x started
2026-03-10T11:31:55.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:55 vm07 bash[17804]: audit 2026-03-10T11:31:54.803375+0000 mon.c (mon.1) 35 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T11:31:55.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:55 vm07 bash[17804]: audit 2026-03-10T11:31:54.803961+0000 mon.c (mon.1) 36 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:31:55.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:55 vm07 bash[17804]: audit 2026-03-10T11:31:54.804944+0000 mon.c (mon.1) 37 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T11:31:55.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:55 vm07 bash[17804]: audit 2026-03-10T11:31:54.805455+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:31:55.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:55 vm05 bash[22470]: cluster 2026-03-10T11:31:53.837022+0000 mgr.y (mgr.24310) 250 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:55 vm05 bash[22470]: audit 2026-03-10T11:31:54.730126+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:55 vm05 bash[22470]: audit 2026-03-10T11:31:54.737580+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:55 vm05 bash[22470]: cluster 2026-03-10T11:31:54.801010+0000 mon.a (mon.0) 763 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:55 vm05 bash[22470]: cluster 2026-03-10T11:31:54.801136+0000 mon.a (mon.0) 764 : cluster [DBG] Standby manager daemon x started
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:55 vm05 bash[22470]: audit 2026-03-10T11:31:54.803375+0000 mon.c (mon.1) 35 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:55 vm05 bash[22470]: audit 2026-03-10T11:31:54.803961+0000 mon.c (mon.1) 36 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:55 vm05 bash[22470]: audit 2026-03-10T11:31:54.804944+0000 mon.c (mon.1) 37 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:55 vm05 bash[22470]: audit 2026-03-10T11:31:54.805455+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[17453]: cluster 2026-03-10T11:31:53.837022+0000 mgr.y (mgr.24310) 250 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[17453]: audit 2026-03-10T11:31:54.730126+0000 mon.a (mon.0) 761 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[17453]: audit 2026-03-10T11:31:54.737580+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[17453]: cluster 2026-03-10T11:31:54.801010+0000 mon.a (mon.0) 763 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[17453]: cluster 2026-03-10T11:31:54.801136+0000 mon.a (mon.0) 764 : cluster [DBG] Standby manager daemon x started
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[17453]: audit 2026-03-10T11:31:54.803375+0000 mon.c (mon.1) 35 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[17453]: audit 2026-03-10T11:31:54.803961+0000 mon.c (mon.1) 36 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[17453]: audit 2026-03-10T11:31:54.804944+0000 mon.c (mon.1) 37 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T11:31:55.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:55 vm05 bash[17453]: audit 2026-03-10T11:31:54.805455+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.? 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:31:57.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:56 vm05 bash[22470]: cluster 2026-03-10T11:31:55.757848+0000 mon.a (mon.0) 765 : cluster [DBG] mgrmap e22: y(active, since 5m), standbys: x
2026-03-10T11:31:57.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:56 vm05 bash[22470]: cluster 2026-03-10T11:31:55.837345+0000 mgr.y (mgr.24310) 251 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:57.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:56 vm05 bash[17453]: cluster 2026-03-10T11:31:55.757848+0000 mon.a (mon.0) 765 : cluster [DBG] mgrmap e22: y(active, since 5m), standbys: x
2026-03-10T11:31:57.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:56 vm05 bash[17453]: cluster 2026-03-10T11:31:55.837345+0000 mgr.y (mgr.24310) 251 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:57.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:56 vm07 bash[17804]: cluster 2026-03-10T11:31:55.757848+0000 mon.a (mon.0) 765 : cluster [DBG] mgrmap e22: y(active, since 5m), standbys: x
2026-03-10T11:31:57.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:56 vm07 bash[17804]: cluster 2026-03-10T11:31:55.837345+0000 mgr.y (mgr.24310) 251 : cluster [DBG] pgmap v204: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:31:59.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:31:58 vm05 bash[22470]: cluster 2026-03-10T11:31:57.837887+0000 mgr.y (mgr.24310) 252 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:59.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:31:58 vm05 bash[17453]: cluster 2026-03-10T11:31:57.837887+0000 mgr.y (mgr.24310) 252 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:31:59.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:31:58 vm07 bash[17804]: cluster 2026-03-10T11:31:57.837887+0000 mgr.y (mgr.24310) 252 : cluster [DBG] pgmap v205: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:01.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:00 vm07 bash[17804]: cluster 2026-03-10T11:31:59.838228+0000 mgr.y (mgr.24310) 253 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:01.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:00 vm05 bash[22470]: cluster 2026-03-10T11:31:59.838228+0000 mgr.y (mgr.24310) 253 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:01.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:00 vm05 bash[17453]: cluster 2026-03-10T11:31:59.838228+0000 mgr.y (mgr.24310) 253 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:01.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:32:01 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:01] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
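Every alertmanager retry in this stretch fails the same way: the dashboard's TLS certificate on 192.168.123.105/107:8443 carries no IP subjectAltName, so Go's x509 verification rejects it whenever the receiver is addressed by IP (the doubled slash in 8443//api/prometheus_receiver on the .105 URL also suggests that receiver URL was configured with a trailing slash). A sketch of how one might confirm and fix this outside the test, assuming OpenSSL 1.1.1+ for -addext; ceph dashboard set-ssl-certificate is the stock dashboard command:

    # Confirm the served certificate really lacks IP SANs.
    openssl s_client -connect 192.168.123.105:8443 </dev/null 2>/dev/null \
      | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
    # Issue a self-signed certificate that names the IP, then install it.
    openssl req -new -nodes -x509 -days 365 -subj "/CN=vm05" \
      -addext "subjectAltName=IP:192.168.123.105" \
      -keyout dashboard.key -out dashboard.crt
    ceph dashboard set-ssl-certificate -i dashboard.crt
    ceph dashboard set-ssl-certificate-key -i dashboard.key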
2026-03-10T11:32:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:02 vm07 bash[17804]: cluster 2026-03-10T11:32:01.838740+0000 mgr.y (mgr.24310) 254 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:02 vm07 bash[17804]: audit 2026-03-10T11:32:02.132059+0000 mgr.y (mgr.24310) 255 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:32:03.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:02 vm05 bash[22470]: cluster 2026-03-10T11:32:01.838740+0000 mgr.y (mgr.24310) 254 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:03.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:02 vm05 bash[22470]: audit 2026-03-10T11:32:02.132059+0000 mgr.y (mgr.24310) 255 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:32:03.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:02 vm05 bash[17453]: cluster 2026-03-10T11:32:01.838740+0000 mgr.y (mgr.24310) 254 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:03.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:02 vm05 bash[17453]: audit 2026-03-10T11:32:02.132059+0000 mgr.y (mgr.24310) 255 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:32:03.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:03 vm05 bash[42794]: level=error ts=2026-03-10T11:32:03.524Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:32:03.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:03.526Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:32:03.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:03.527Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:32:04.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:32:03 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:03] "GET /metrics HTTP/1.1" 200 207591 "" "Prometheus/2.33.4"
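Note the retries alternate between x509 failures and, while mgr.x was restarting at 11:31:53, a plain connection refusal on the standby's 8443. Separating reachability from certificate validity is a one-liner if one were debugging this interactively (hypothetical probe; -k skips TLS verification):

    curl -kso /dev/null -w '%{http_code}\n' \
      -X POST -H 'Content-Type: application/json' -d '{}' \
      https://192.168.123.107:8443/api/prometheus_receiver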
2026-03-10T11:32:05.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:04 vm07 bash[17804]: cluster 2026-03-10T11:32:03.839134+0000 mgr.y (mgr.24310) 256 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:05.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:04 vm05 bash[22470]: cluster 2026-03-10T11:32:03.839134+0000 mgr.y (mgr.24310) 256 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:05.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:04 vm05 bash[17453]: cluster 2026-03-10T11:32:03.839134+0000 mgr.y (mgr.24310) 256 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:07.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:06 vm07 bash[17804]: cluster 2026-03-10T11:32:05.839508+0000 mgr.y (mgr.24310) 257 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:07.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:06 vm05 bash[22470]: cluster 2026-03-10T11:32:05.839508+0000 mgr.y (mgr.24310) 257 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:07.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:06 vm05 bash[17453]: cluster 2026-03-10T11:32:05.839508+0000 mgr.y (mgr.24310) 257 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:09.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:08 vm05 bash[22470]: cluster 2026-03-10T11:32:07.840084+0000 mgr.y (mgr.24310) 258 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:09.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:08 vm05 bash[17453]: cluster 2026-03-10T11:32:07.840084+0000 mgr.y (mgr.24310) 258 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:08 vm07 bash[17804]: cluster 2026-03-10T11:32:07.840084+0000 mgr.y (mgr.24310) 258 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:10 vm07 bash[17804]: cluster 2026-03-10T11:32:09.840401+0000 mgr.y (mgr.24310) 259 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:11.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:10 vm05 bash[22470]: cluster 2026-03-10T11:32:09.840401+0000 mgr.y (mgr.24310) 259 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:11.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:10 vm05 bash[17453]: cluster 2026-03-10T11:32:09.840401+0000 mgr.y (mgr.24310) 259 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:11.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:32:11 vm07 bash[36672]: ::ffff:192.168.123.107 - - 
[10/Mar/2026:11:32:11] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:32:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:12 vm07 bash[17804]: cluster 2026-03-10T11:32:11.840972+0000 mgr.y (mgr.24310) 260 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:12 vm07 bash[17804]: audit 2026-03-10T11:32:12.142275+0000 mgr.y (mgr.24310) 261 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:32:13.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:12 vm05 bash[22470]: cluster 2026-03-10T11:32:11.840972+0000 mgr.y (mgr.24310) 260 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:13.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:12 vm05 bash[22470]: audit 2026-03-10T11:32:12.142275+0000 mgr.y (mgr.24310) 261 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:32:13.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:12 vm05 bash[17453]: cluster 2026-03-10T11:32:11.840972+0000 mgr.y (mgr.24310) 260 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:13.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:12 vm05 bash[17453]: audit 2026-03-10T11:32:12.142275+0000 mgr.y (mgr.24310) 261 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:32:13.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:13 vm05 bash[42794]: level=error ts=2026-03-10T11:32:13.526Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:32:13.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:13.527Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:32:13.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:13.528Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:32:14.345 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:32:13 vm05 bash[17722]: ::ffff:192.168.123.107 - - 
[10/Mar/2026:11:32:13] "GET /metrics HTTP/1.1" 200 207591 "" "Prometheus/2.33.4" 2026-03-10T11:32:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:14 vm07 bash[17804]: cluster 2026-03-10T11:32:13.841447+0000 mgr.y (mgr.24310) 262 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:15.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:14 vm05 bash[22470]: cluster 2026-03-10T11:32:13.841447+0000 mgr.y (mgr.24310) 262 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:15.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:14 vm05 bash[17453]: cluster 2026-03-10T11:32:13.841447+0000 mgr.y (mgr.24310) 262 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:16 vm07 bash[17804]: cluster 2026-03-10T11:32:15.841778+0000 mgr.y (mgr.24310) 263 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:16 vm07 bash[17804]: audit 2026-03-10T11:32:15.941232+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:32:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:16 vm07 bash[17804]: audit 2026-03-10T11:32:15.941513+0000 mon.b (mon.2) 121 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:32:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:16 vm07 bash[17804]: audit 2026-03-10T11:32:15.968443+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:32:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:16 vm07 bash[17804]: audit 2026-03-10T11:32:15.968644+0000 mon.b (mon.2) 122 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:32:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:16 vm05 bash[22470]: cluster 2026-03-10T11:32:15.841778+0000 mgr.y (mgr.24310) 263 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:16 vm05 bash[22470]: audit 2026-03-10T11:32:15.941232+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:32:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:16 vm05 bash[22470]: audit 2026-03-10T11:32:15.941513+0000 mon.b (mon.2) 121 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:32:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:16 vm05 bash[22470]: audit 2026-03-10T11:32:15.968443+0000 mon.a (mon.0) 767 : audit 
[INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:32:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:16 vm05 bash[22470]: audit 2026-03-10T11:32:15.968644+0000 mon.b (mon.2) 122 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:32:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:16 vm05 bash[17453]: cluster 2026-03-10T11:32:15.841778+0000 mgr.y (mgr.24310) 263 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:16 vm05 bash[17453]: audit 2026-03-10T11:32:15.941232+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:32:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:16 vm05 bash[17453]: audit 2026-03-10T11:32:15.941513+0000 mon.b (mon.2) 121 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:32:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:16 vm05 bash[17453]: audit 2026-03-10T11:32:15.968443+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24310 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:32:17.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:16 vm05 bash[17453]: audit 2026-03-10T11:32:15.968644+0000 mon.b (mon.2) 122 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:32:19.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:18 vm05 bash[22470]: cluster 2026-03-10T11:32:17.842290+0000 mgr.y (mgr.24310) 264 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:19.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:18 vm05 bash[17453]: cluster 2026-03-10T11:32:17.842290+0000 mgr.y (mgr.24310) 264 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:19.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:18 vm07 bash[17804]: cluster 2026-03-10T11:32:17.842290+0000 mgr.y (mgr.24310) 264 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:21.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:20 vm07 bash[17804]: cluster 2026-03-10T11:32:19.842595+0000 mgr.y (mgr.24310) 265 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:21.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:20 vm05 bash[22470]: cluster 2026-03-10T11:32:19.842595+0000 mgr.y (mgr.24310) 265 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:21.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:20 vm05 bash[17453]: cluster 
2026-03-10T11:32:19.842595+0000 mgr.y (mgr.24310) 265 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:21.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:32:21 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:21] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:32:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:22 vm07 bash[17804]: cluster 2026-03-10T11:32:21.843047+0000 mgr.y (mgr.24310) 266 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:22 vm07 bash[17804]: audit 2026-03-10T11:32:22.150745+0000 mgr.y (mgr.24310) 267 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:32:23.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:22 vm05 bash[22470]: cluster 2026-03-10T11:32:21.843047+0000 mgr.y (mgr.24310) 266 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:23.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:22 vm05 bash[22470]: audit 2026-03-10T11:32:22.150745+0000 mgr.y (mgr.24310) 267 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:32:23.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:22 vm05 bash[17453]: cluster 2026-03-10T11:32:21.843047+0000 mgr.y (mgr.24310) 266 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:23.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:22 vm05 bash[17453]: audit 2026-03-10T11:32:22.150745+0000 mgr.y (mgr.24310) 267 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:32:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:23 vm05 bash[42794]: level=error ts=2026-03-10T11:32:23.526Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:32:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:23.528Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:32:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:23.528Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" 
attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:32:24.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:32:23 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:23] "GET /metrics HTTP/1.1" 200 207591 "" "Prometheus/2.33.4" 2026-03-10T11:32:25.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:24 vm05 bash[22470]: cluster 2026-03-10T11:32:23.843394+0000 mgr.y (mgr.24310) 268 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:25.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:24 vm05 bash[17453]: cluster 2026-03-10T11:32:23.843394+0000 mgr.y (mgr.24310) 268 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:25.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:24 vm07 bash[17804]: cluster 2026-03-10T11:32:23.843394+0000 mgr.y (mgr.24310) 268 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:27.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:26 vm05 bash[22470]: cluster 2026-03-10T11:32:25.843707+0000 mgr.y (mgr.24310) 269 : cluster [DBG] pgmap v219: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:27.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:26 vm05 bash[17453]: cluster 2026-03-10T11:32:25.843707+0000 mgr.y (mgr.24310) 269 : cluster [DBG] pgmap v219: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:27.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:26 vm07 bash[17804]: cluster 2026-03-10T11:32:25.843707+0000 mgr.y (mgr.24310) 269 : cluster [DBG] pgmap v219: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:29.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:28 vm05 bash[22470]: cluster 2026-03-10T11:32:27.844228+0000 mgr.y (mgr.24310) 270 : cluster [DBG] pgmap v220: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:29.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:28 vm05 bash[17453]: cluster 2026-03-10T11:32:27.844228+0000 mgr.y (mgr.24310) 270 : cluster [DBG] pgmap v220: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:29.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:28 vm07 bash[17804]: cluster 2026-03-10T11:32:27.844228+0000 mgr.y (mgr.24310) 270 : cluster [DBG] pgmap v220: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:31.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:30 vm07 bash[17804]: cluster 2026-03-10T11:32:29.844523+0000 mgr.y (mgr.24310) 271 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:31.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:30 vm05 bash[22470]: cluster 2026-03-10T11:32:29.844523+0000 mgr.y (mgr.24310) 271 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:31.344 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:30 vm05 bash[17453]: cluster 2026-03-10T11:32:29.844523+0000 mgr.y (mgr.24310) 271 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:31.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:32:31 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:31] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4" 2026-03-10T11:32:33.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:32 vm07 bash[17804]: cluster 2026-03-10T11:32:31.844924+0000 mgr.y (mgr.24310) 272 : cluster [DBG] pgmap v222: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:33.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:32 vm07 bash[17804]: audit 2026-03-10T11:32:32.160663+0000 mgr.y (mgr.24310) 273 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:32:33.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:32 vm05 bash[17453]: cluster 2026-03-10T11:32:31.844924+0000 mgr.y (mgr.24310) 272 : cluster [DBG] pgmap v222: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:33.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:32 vm05 bash[17453]: audit 2026-03-10T11:32:32.160663+0000 mgr.y (mgr.24310) 273 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:32:33.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:32 vm05 bash[22470]: cluster 2026-03-10T11:32:31.844924+0000 mgr.y (mgr.24310) 272 : cluster [DBG] pgmap v222: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:33.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:32 vm05 bash[22470]: audit 2026-03-10T11:32:32.160663+0000 mgr.y (mgr.24310) 273 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:32:33.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:33 vm05 bash[42794]: level=error ts=2026-03-10T11:32:33.527Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:32:33.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:33.529Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs" 2026-03-10T11:32:33.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:33 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:33.529Z caller=notify.go:724 component=dispatcher 
receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:32:34.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:32:33 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:33] "GET /metrics HTTP/1.1" 200 207600 "" "Prometheus/2.33.4" 2026-03-10T11:32:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:34 vm07 bash[17804]: cluster 2026-03-10T11:32:33.845194+0000 mgr.y (mgr.24310) 274 : cluster [DBG] pgmap v223: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:35.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:34 vm05 bash[22470]: cluster 2026-03-10T11:32:33.845194+0000 mgr.y (mgr.24310) 274 : cluster [DBG] pgmap v223: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:35.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:34 vm05 bash[17453]: cluster 2026-03-10T11:32:33.845194+0000 mgr.y (mgr.24310) 274 : cluster [DBG] pgmap v223: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:36 vm07 bash[17804]: cluster 2026-03-10T11:32:35.845483+0000 mgr.y (mgr.24310) 275 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:36 vm05 bash[22470]: cluster 2026-03-10T11:32:35.845483+0000 mgr.y (mgr.24310) 275 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:36 vm05 bash[17453]: cluster 2026-03-10T11:32:35.845483+0000 mgr.y (mgr.24310) 275 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:39.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:38 vm05 bash[22470]: cluster 2026-03-10T11:32:37.845998+0000 mgr.y (mgr.24310) 276 : cluster [DBG] pgmap v225: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:39.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:38 vm05 bash[17453]: cluster 2026-03-10T11:32:37.845998+0000 mgr.y (mgr.24310) 276 : cluster [DBG] pgmap v225: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:39.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:38 vm07 bash[17804]: cluster 2026-03-10T11:32:37.845998+0000 mgr.y (mgr.24310) 276 : cluster [DBG] pgmap v225: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:32:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:40 vm07 bash[17804]: cluster 2026-03-10T11:32:39.846304+0000 mgr.y (mgr.24310) 277 : cluster [DBG] pgmap v226: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:32:41.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:40 vm05 bash[22470]: cluster 2026-03-10T11:32:39.846304+0000 mgr.y (mgr.24310) 277 : cluster [DBG] pgmap v226: 161 pgs: 161 active+clean; 457 KiB 
2026-03-10T11:32:41.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:40 vm05 bash[17453]: cluster 2026-03-10T11:32:39.846304+0000 mgr.y (mgr.24310) 277 : cluster [DBG] pgmap v226: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:41.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:32:41 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:41] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:32:43.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:42 vm07 bash[17804]: cluster 2026-03-10T11:32:41.846749+0000 mgr.y (mgr.24310) 278 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:43.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:42 vm07 bash[17804]: audit 2026-03-10T11:32:42.169319+0000 mgr.y (mgr.24310) 279 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:32:43.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:42 vm05 bash[22470]: cluster 2026-03-10T11:32:41.846749+0000 mgr.y (mgr.24310) 278 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:43.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:42 vm05 bash[22470]: audit 2026-03-10T11:32:42.169319+0000 mgr.y (mgr.24310) 279 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:32:43.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:42 vm05 bash[17453]: cluster 2026-03-10T11:32:41.846749+0000 mgr.y (mgr.24310) 278 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:43.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:42 vm05 bash[17453]: audit 2026-03-10T11:32:42.169319+0000 mgr.y (mgr.24310) 279 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:32:43.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:43 vm05 bash[42794]: level=error ts=2026-03-10T11:32:43.528Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:32:43.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:43.529Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:32:43.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:43 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:43.529Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:32:44.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:32:43 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:43] "GET /metrics HTTP/1.1" 200 207600 "" "Prometheus/2.33.4"
2026-03-10T11:32:45.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:44 vm07 bash[17804]: cluster 2026-03-10T11:32:43.847074+0000 mgr.y (mgr.24310) 280 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:45.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:44 vm05 bash[17453]: cluster 2026-03-10T11:32:43.847074+0000 mgr.y (mgr.24310) 280 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:45.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:44 vm05 bash[22470]: cluster 2026-03-10T11:32:43.847074+0000 mgr.y (mgr.24310) 280 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:46 vm07 bash[17804]: cluster 2026-03-10T11:32:45.847450+0000 mgr.y (mgr.24310) 281 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:47.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:46 vm05 bash[22470]: cluster 2026-03-10T11:32:45.847450+0000 mgr.y (mgr.24310) 281 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:47.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:46 vm05 bash[17453]: cluster 2026-03-10T11:32:45.847450+0000 mgr.y (mgr.24310) 281 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:49.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:48 vm05 bash[17453]: cluster 2026-03-10T11:32:47.848039+0000 mgr.y (mgr.24310) 282 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:49.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:48 vm05 bash[22470]: cluster 2026-03-10T11:32:47.848039+0000 mgr.y (mgr.24310) 282 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:48 vm07 bash[17804]: cluster 2026-03-10T11:32:47.848039+0000 mgr.y (mgr.24310) 282 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:51.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:50 vm07 bash[17804]: cluster 2026-03-10T11:32:49.848333+0000 mgr.y (mgr.24310) 283 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:51.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:50 vm05 bash[22470]: cluster 2026-03-10T11:32:49.848333+0000 mgr.y (mgr.24310) 283 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:51.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:50 vm05 bash[17453]: cluster 2026-03-10T11:32:49.848333+0000 mgr.y (mgr.24310) 283 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:51.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:32:51 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:51] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:32:53.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:52 vm05 bash[22470]: cluster 2026-03-10T11:32:51.848877+0000 mgr.y (mgr.24310) 284 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:53.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:52 vm05 bash[22470]: audit 2026-03-10T11:32:52.178802+0000 mgr.y (mgr.24310) 285 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:32:53.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:52 vm05 bash[17453]: cluster 2026-03-10T11:32:51.848877+0000 mgr.y (mgr.24310) 284 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:53.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:52 vm05 bash[17453]: audit 2026-03-10T11:32:52.178802+0000 mgr.y (mgr.24310) 285 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:32:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:52 vm07 bash[17804]: cluster 2026-03-10T11:32:51.848877+0000 mgr.y (mgr.24310) 284 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:52 vm07 bash[17804]: audit 2026-03-10T11:32:52.178802+0000 mgr.y (mgr.24310) 285 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:32:53.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:53 vm05 bash[42794]: level=error ts=2026-03-10T11:32:53.528Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:32:53.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:53.530Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:32:53.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:32:53 vm05 bash[42794]: level=warn ts=2026-03-10T11:32:53.530Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:32:54.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:32:53 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:32:53] "GET /metrics HTTP/1.1" 200 207600 "" "Prometheus/2.33.4"
2026-03-10T11:32:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:54 vm05 bash[17453]: cluster 2026-03-10T11:32:53.849181+0000 mgr.y (mgr.24310) 286 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:54 vm05 bash[17453]: audit 2026-03-10T11:32:54.742211+0000 mon.b (mon.2) 123 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:32:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:54 vm05 bash[17453]: audit 2026-03-10T11:32:54.743186+0000 mon.b (mon.2) 124 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:32:55.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:54 vm05 bash[17453]: audit 2026-03-10T11:32:54.743786+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:32:55.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:54 vm05 bash[17453]: audit 2026-03-10T11:32:54.911323+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:32:55.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:54 vm05 bash[22470]: cluster 2026-03-10T11:32:53.849181+0000 mgr.y (mgr.24310) 286 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:55.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:54 vm05 bash[22470]: audit 2026-03-10T11:32:54.742211+0000 mon.b (mon.2) 123 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:32:55.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:54 vm05 bash[22470]: audit 2026-03-10T11:32:54.743186+0000 mon.b (mon.2) 124 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:32:55.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:54 vm05 bash[22470]: audit 2026-03-10T11:32:54.743786+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:32:55.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:54 vm05 bash[22470]: audit 2026-03-10T11:32:54.911323+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:32:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:54 vm07 bash[17804]: cluster 2026-03-10T11:32:53.849181+0000 mgr.y (mgr.24310) 286 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:54 vm07 bash[17804]: audit 2026-03-10T11:32:54.742211+0000 mon.b (mon.2) 123 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:32:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:54 vm07 bash[17804]: audit 2026-03-10T11:32:54.743186+0000 mon.b (mon.2) 124 : audit [DBG] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:32:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:54 vm07 bash[17804]: audit 2026-03-10T11:32:54.743786+0000 mon.b (mon.2) 125 : audit [INF] from='mgr.24310 192.168.123.105:0/2176784989' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:32:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:54 vm07 bash[17804]: audit 2026-03-10T11:32:54.911323+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24310 ' entity='mgr.y'
2026-03-10T11:32:57.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:57 vm05 bash[17453]: cluster 2026-03-10T11:32:55.849551+0000 mgr.y (mgr.24310) 287 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:57.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:57 vm05 bash[22470]: cluster 2026-03-10T11:32:55.849551+0000 mgr.y (mgr.24310) 287 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:57 vm07 bash[17804]: cluster 2026-03-10T11:32:55.849551+0000 mgr.y (mgr.24310) 287 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:32:59.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:32:58 vm05 bash[22470]: cluster 2026-03-10T11:32:57.850046+0000 mgr.y (mgr.24310) 288 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:59.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:32:58 vm05 bash[17453]: cluster 2026-03-10T11:32:57.850046+0000 mgr.y (mgr.24310) 288 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:32:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:32:58 vm07 bash[17804]: cluster 2026-03-10T11:32:57.850046+0000 mgr.y (mgr.24310) 288 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:01.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:01 vm05 bash[22470]: cluster 2026-03-10T11:32:59.850416+0000 mgr.y (mgr.24310) 289 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:01.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:01 vm05 bash[17453]: cluster 2026-03-10T11:32:59.850416+0000 mgr.y (mgr.24310) 289 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:01.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:01 vm07 bash[17804]: cluster 2026-03-10T11:32:59.850416+0000 mgr.y (mgr.24310) 289 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:01.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:01 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:33:01] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:33:03.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:03 vm05 bash[17453]: cluster 2026-03-10T11:33:01.850862+0000 mgr.y (mgr.24310) 290 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:03.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:03 vm05 bash[17453]: audit 2026-03-10T11:33:02.188950+0000 mgr.y (mgr.24310) 291 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:03.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:03 vm05 bash[22470]: cluster 2026-03-10T11:33:01.850862+0000 mgr.y (mgr.24310) 290 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:03.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:03 vm05 bash[22470]: audit 2026-03-10T11:33:02.188950+0000 mgr.y (mgr.24310) 291 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:03 vm07 bash[17804]: cluster 2026-03-10T11:33:01.850862+0000 mgr.y (mgr.24310) 290 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:03 vm07 bash[17804]: audit 2026-03-10T11:33:02.188950+0000 mgr.y (mgr.24310) 291 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:03.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:03 vm05 bash[42794]: level=error ts=2026-03-10T11:33:03.529Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:33:03.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:03.532Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:33:03.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:03 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:03.533Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:33:04.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:03 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:33:03] "GET /metrics HTTP/1.1" 200 207605 "" "Prometheus/2.33.4"
2026-03-10T11:33:05.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:05 vm05 bash[17453]: cluster 2026-03-10T11:33:03.851143+0000 mgr.y (mgr.24310) 292 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:05.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:05 vm05 bash[22470]: cluster 2026-03-10T11:33:03.851143+0000 mgr.y (mgr.24310) 292 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:05.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:05 vm07 bash[17804]: cluster 2026-03-10T11:33:03.851143+0000 mgr.y (mgr.24310) 292 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:07.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:07 vm05 bash[22470]: cluster 2026-03-10T11:33:05.851470+0000 mgr.y (mgr.24310) 293 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:07.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:07 vm05 bash[17453]: cluster 2026-03-10T11:33:05.851470+0000 mgr.y (mgr.24310) 293 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:07.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:07 vm07 bash[17804]: cluster 2026-03-10T11:33:05.851470+0000 mgr.y (mgr.24310) 293 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:09.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:08 vm05 bash[22470]: cluster 2026-03-10T11:33:07.852079+0000 mgr.y (mgr.24310) 294 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:09.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:08 vm05 bash[17453]: cluster 2026-03-10T11:33:07.852079+0000 mgr.y (mgr.24310) 294 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:08 vm07 bash[17804]: cluster 2026-03-10T11:33:07.852079+0000 mgr.y (mgr.24310) 294 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:10.402 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:33:10.852 INFO:teuthology.orchestra.run.vm05.stdout:NAME                   HOST  PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION               IMAGE ID      CONTAINER ID
2026-03-10T11:33:10.852 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a         vm05  *:9093,9094  running (6m)   115s ago   6m   15.8M    -                              ba2b418f427c  3a344bc09343
2026-03-10T11:33:10.852 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a              vm07  *:3000       running (6m)   76s ago    6m   40.0M    -        8.3.5                 dad864ee21e9  a53e654c60d5
2026-03-10T11:33:10.852 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk  vm05               running (6m)   115s ago   6m   42.1M    -        3.5                   e1d6a67b021e  7c51d6393d48
2026-03-10T11:33:10.852 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x                  vm07  *:8443,9283  running (79s)  76s ago    9m   299M     -        19.2.3-678-ge911bdeb  654f31e6858e  29cf7638c524
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y                  vm05  *:9283       running (10m)  115s ago   10m  449M     -        17.2.0                e1d6a67b021e  c74ea9550b91
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:mon.a                  vm05               running (10m)  115s ago   10m  45.2M    2048M    17.2.0                e1d6a67b021e  dd0f50543cf6
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:mon.b                  vm07               running (9m)   76s ago    9m   39.2M    2048M    17.2.0                e1d6a67b021e  824de3717020
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:mon.c                  vm05               running (9m)   115s ago   9m   37.3M    2048M    17.2.0                e1d6a67b021e  bd8a00588046
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a        vm05  *:9100       running (6m)   115s ago   6m   10.2M    -                              1dbe0e931976  77163141ef6d
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b        vm07  *:9100       running (6m)   76s ago    6m   9543k    -                              1dbe0e931976  142eaa08cfb0
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.0                  vm05               running (9m)   115s ago   9m   48.8M    4096M    17.2.0                e1d6a67b021e  767dc4919d3a
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.1                  vm05               running (8m)   115s ago   8m   50.6M    4096M    17.2.0                e1d6a67b021e  66628e3a12c8
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.2                  vm05               running (8m)   115s ago   8m   47.0M    4096M    17.2.0                e1d6a67b021e  561729c88c06
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.3                  vm05               running (8m)   115s ago   8m   46.4M    4096M    17.2.0                e1d6a67b021e  56034d2898b8
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.4                  vm07               running (8m)   76s ago    8m   48.7M    4096M    17.2.0                e1d6a67b021e  452f5de332b6
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.5                  vm07               running (7m)   76s ago    7m   45.9M    4096M    17.2.0                e1d6a67b021e  bf6c3e870ec6
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.6                  vm07               running (7m)   76s ago    7m   45.5M    4096M    17.2.0                e1d6a67b021e  cb67459019f8
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.7                  vm07               running (7m)   76s ago    7m   47.0M    4096M    17.2.0                e1d6a67b021e  c542edbe96b5
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a           vm07  *:9095       running (6m)   76s ago    6m   51.3M    -                              514e6a882f6e  979d30e0f128
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz    vm05  *:8000       running (6m)   115s ago   6m   82.1M    -        17.2.0                e1d6a67b021e  f2644e7eb2f2
2026-03-10T11:33:10.853 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh    vm07  *:8000       running (6m)   76s ago    6m   82.7M    -        17.2.0                e1d6a67b021e  4a4d4c0acae7
2026-03-10T11:33:10.911 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
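At this point the staggered upgrade has moved only mgr.x to the target build 19.2.3-678-ge911bdeb; every other daemon still reports 17.2.0, as the ceph versions output below confirms. The same view can be narrowed to a single daemon type with the orchestrator's own filters — a sketch of an equivalent spot-check, run inside the same cephadm shell as the commands above (the jq field names follow the orchestrator's JSON daemon description and may differ slightly by release):

  # list only the mgr daemons with the version each one is running
  ceph orch ps --daemon-type mgr --format json-pretty | jq '.[] | {daemon_id, version}'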
2026-03-10T11:33:11.163 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:10 vm05 bash[22470]: cluster 2026-03-10T11:33:09.852364+0000 mgr.y (mgr.24310) 295 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:11.163 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:10 vm05 bash[17453]: cluster 2026-03-10T11:33:09.852364+0000 mgr.y (mgr.24310) 295 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:11.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:10 vm07 bash[17804]: cluster 2026-03-10T11:33:09.852364+0000 mgr.y (mgr.24310) 295 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    "mon": {
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    "mgr": {
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 1,
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    "osd": {
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    "mds": {},
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    "rgw": {
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    "overall": {
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 14,
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:    }
2026-03-10T11:33:11.379 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:33:11.434 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph -s'
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:  cluster:
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    id:     72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    health: HEALTH_OK
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:  services:
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    mon: 3 daemons, quorum a,c,b (age 9m)
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    mgr: y(active, since 6m), standbys: x
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    osd: 8 osds: 8 up (since 7m), 8 in (since 7m)
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    rgw: 2 daemons active (2 hosts, 1 zones)
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:  data:
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    pools:   6 pools, 161 pgs
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    objects: 209 objects, 457 KiB
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    usage:   71 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    pgs:     161 active+clean
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:  io:
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:    client: 853 B/s rd, 0 op/s rd, 0 op/s wr
2026-03-10T11:33:11.901 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:33:11.923 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:11 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:33:11] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:33:11.932 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:11 vm05 bash[22470]: audit 2026-03-10T11:33:10.847187+0000 mgr.y (mgr.24310) 296 : audit [DBG] from='client.24739 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:33:11.932 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:11 vm05 bash[22470]: audit 2026-03-10T11:33:11.379930+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.105:0/486951217' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:33:11.932 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:11 vm05 bash[22470]: audit 2026-03-10T11:33:11.901733+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.105:0/4019666632' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:33:11.932 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:11 vm05 bash[17453]: audit 2026-03-10T11:33:10.847187+0000 mgr.y (mgr.24310) 296 : audit [DBG] from='client.24739 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:33:11.932 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:11 vm05 bash[17453]: audit 2026-03-10T11:33:11.379930+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.105:0/486951217' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:33:11.932 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:11 vm05 bash[17453]: audit 2026-03-10T11:33:11.901733+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.105:0/4019666632' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:33:11.967 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-10T11:33:12.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:11 vm07 bash[17804]: audit 2026-03-10T11:33:10.847187+0000 mgr.y (mgr.24310) 296 : audit [DBG] from='client.24739 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:33:12.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:11 vm07 bash[17804]: audit 2026-03-10T11:33:11.379930+0000 mon.a (mon.0) 769 : audit [DBG] from='client.? 192.168.123.105:0/486951217' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:33:12.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:11 vm07 bash[17804]: audit 2026-03-10T11:33:11.901733+0000 mon.a (mon.0) 770 : audit [DBG] from='client.? 192.168.123.105:0/4019666632' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:33:12.429 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
2026-03-10T11:33:12.488 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | length == 2'"'"''
2026-03-10T11:33:12.962 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:33:13.008 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph mgr fail'
2026-03-10T11:33:13.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:12 vm05 bash[22470]: cluster 2026-03-10T11:33:11.852869+0000 mgr.y (mgr.24310) 297 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:13.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:12 vm05 bash[22470]: audit 2026-03-10T11:33:12.196534+0000 mgr.y (mgr.24310) 298 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:13.008 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:12 vm05 bash[22470]: audit 2026-03-10T11:33:12.430106+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.105:0/3859273978' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
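The jq gate above (`ceph versions | jq -e '.mgr | length == 2'`) encodes the staggered-upgrade expectation: the mgr map must span exactly two builds, the 17.2.0 mgr.y and the squid mgr.x, before the suite proceeds, and the `ceph mgr fail` issued right after it forces the upgraded standby to take over as active. A slightly stricter variant of the same assertion — illustrative only, assuming jq is available in the shell image as the suite already does:

  # require exactly two distinct mgr versions, each matching an expected release name
  ceph versions | jq -e '.mgr | length == 2 and (keys | map(test("quincy|squid")) | all)'

Because jq -e reflects the boolean result in its exit status, teuthology can treat the check as pass/fail directly.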
2026-03-10T11:33:13.008 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:12 vm05 bash[17453]: cluster 2026-03-10T11:33:11.852869+0000 mgr.y (mgr.24310) 297 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:13.009 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:12 vm05 bash[17453]: audit 2026-03-10T11:33:12.196534+0000 mgr.y (mgr.24310) 298 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:13.009 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:12 vm05 bash[17453]: audit 2026-03-10T11:33:12.430106+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.105:0/3859273978' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:33:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:12 vm07 bash[17804]: cluster 2026-03-10T11:33:11.852869+0000 mgr.y (mgr.24310) 297 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:12 vm07 bash[17804]: audit 2026-03-10T11:33:12.196534+0000 mgr.y (mgr.24310) 298 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:12 vm07 bash[17804]: audit 2026-03-10T11:33:12.430106+0000 mon.a (mon.0) 771 : audit [DBG] from='client.? 192.168.123.105:0/3859273978' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:33:13.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:13 vm05 bash[42794]: level=error ts=2026-03-10T11:33:13.530Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[1]: notify retry canceled after 7 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs; ceph-dashboard/webhook[0]: notify retry canceled after 7 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:33:13.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:13.532Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:33:13.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:13 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:13.532Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:33:14.039 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180'
2026-03-10T11:33:14.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:13 vm07 bash[36672]: [10/Mar/2026:11:33:13] ENGINE Bus STOPPING
2026-03-10T11:33:14.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:14 vm07 bash[36672]: [10/Mar/2026:11:33:14] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T11:33:14.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:14 vm07 bash[36672]: [10/Mar/2026:11:33:14] ENGINE Bus STOPPED
2026-03-10T11:33:14.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:14 vm07 bash[36672]: [10/Mar/2026:11:33:14] ENGINE Bus STARTING
2026-03-10T11:33:14.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:13 vm07 bash[17804]: audit 2026-03-10T11:33:12.953120+0000 mon.b (mon.2) 126 : audit [DBG] from='client.? 192.168.123.105:0/3724280881' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:33:14.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:13 vm07 bash[17804]: audit 2026-03-10T11:33:13.439543+0000 mon.a (mon.0) 772 : audit [INF] from='client.? 192.168.123.105:0/3129278235' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-10T11:33:14.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:13 vm07 bash[17804]: cluster 2026-03-10T11:33:13.445238+0000 mon.a (mon.0) 773 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T11:33:14.211 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:13 vm05 bash[22470]: audit 2026-03-10T11:33:12.953120+0000 mon.b (mon.2) 126 : audit [DBG] from='client.? 192.168.123.105:0/3724280881' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:33:14.211 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:13 vm05 bash[22470]: audit 2026-03-10T11:33:13.439543+0000 mon.a (mon.0) 772 : audit [INF] from='client.? 192.168.123.105:0/3129278235' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-10T11:33:14.211 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:13 vm05 bash[22470]: cluster 2026-03-10T11:33:13.445238+0000 mon.a (mon.0) 773 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T11:33:14.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:13 vm05 bash[17453]: audit 2026-03-10T11:33:12.953120+0000 mon.b (mon.2) 126 : audit [DBG] from='client.? 192.168.123.105:0/3724280881' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:33:14.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:13 vm05 bash[17453]: audit 2026-03-10T11:33:13.439543+0000 mon.a (mon.0) 772 : audit [INF] from='client.? 192.168.123.105:0/3129278235' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-10T11:33:14.212 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:13 vm05 bash[17453]: cluster 2026-03-10T11:33:13.445238+0000 mon.a (mon.0) 773 : cluster [DBG] osdmap e82: 8 total, 8 up, 8 in
2026-03-10T11:33:14.212 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:13 vm05 bash[17722]: debug 2026-03-10T11:33:13.936+0000 7f7a993e9700 -1 mgr handle_mgr_map I was active but no longer am
2026-03-10T11:33:14.212 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:13 vm05 bash[17722]: ignoring --setuser ceph since I am not root
2026-03-10T11:33:14.212 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:13 vm05 bash[17722]: ignoring --setgroup ceph since I am not root
2026-03-10T11:33:14.212 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:14 vm05 bash[17722]: debug 2026-03-10T11:33:13.996+0000 7f88a5afe700 1 -- 192.168.123.105:0/4254207772 <== mon.2 v2:192.168.123.107:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 194+0+0 (secure 0 0 0) 0x561b88c30340 con 0x561b88d36400
2026-03-10T11:33:14.212 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:14 vm05 bash[17722]: debug 2026-03-10T11:33:14.080+0000 7f88ae55a000 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T11:33:14.212 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:14 vm05 bash[17722]: debug 2026-03-10T11:33:14.132+0000 7f88ae55a000 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T11:33:14.212 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:14.141Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=2 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": dial tcp 192.168.123.107:8443: connect: connection refused"
2026-03-10T11:33:14.212 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:14.168Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=2 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": dial tcp 192.168.123.105:8443: connect: connection refused"
2026-03-10T11:33:14.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:14 vm07 bash[36672]: [10/Mar/2026:11:33:14] ENGINE Serving on http://:::9283
2026-03-10T11:33:14.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:14 vm07 bash[36672]: [10/Mar/2026:11:33:14] ENGINE Bus STARTED
2026-03-10T11:33:14.844 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:14 vm05 bash[17722]: debug 2026-03-10T11:33:14.572+0000 7f88ae55a000 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T11:33:14.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:14.574Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=3 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.940709+0000 mon.a (mon.0) 774 : audit [INF] from='client.? 192.168.123.105:0/3129278235' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
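The failover is visible above: mgr.y logs "I was active but no longer am", mgr.x restarts its CherryPy engine on :9283, and the mon log records mgrmap e23 with x active immediately below. To watch the same handover by hand — a sketch, assuming admin keyring access as in the cephadm shell calls above; ceph mgr stat reports the active mgr as JSON:

  # poll until the upgraded standby has become the active mgr
  until ceph mgr stat | jq -e '.active_name == "x"' >/dev/null; do sleep 2; done

The missing-NOTIFY_TYPES messages from mgr.y are module-load warnings commonly seen on mixed-version clusters and are typically benign here.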
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: cluster 2026-03-10T11:33:13.940860+0000 mon.a (mon.0) 775 : cluster [DBG] mgrmap e23: x(active, starting, since 0.49985s)
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.950732+0000 mon.b (mon.2) 127 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.951035+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.951659+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.952171+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.952751+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.953333+0000 mon.b (mon.2) 132 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.953737+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.954115+0000 mon.b (mon.2) 134 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.954461+0000 mon.b (mon.2) 135 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.954810+0000 mon.b (mon.2) 136 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.955162+0000 mon.b (mon.2) 137 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.955505+0000 mon.b (mon.2) 138 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.955941+0000 mon.b (mon.2) 139 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.956328+0000 mon.b (mon.2) 140 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:13.956969+0000 mon.b (mon.2) 141 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: cluster 2026-03-10T11:33:14.090072+0000 mon.a (mon.0) 776 : cluster [INF] Manager daemon x is now available
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:14.100908+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: cephadm 2026-03-10T11:33:14.102730+0000 mgr.x (mgr.24733) 1 : cephadm [INF] Queued rgw.foo for migration
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: cephadm 2026-03-10T11:33:14.103056+0000 mgr.x (mgr.24733) 2 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}}
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:14.115530+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: cephadm 2026-03-10T11:33:14.118185+0000 mgr.x (mgr.24733) 3 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: cephadm 2026-03-10T11:33:14.118236+0000 mgr.x (mgr.24733) 4 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: cephadm 2026-03-10T11:33:14.118384+0000 mgr.x (mgr.24733) 5 : cephadm [INF] Checking for cert/key for grafana.a
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:14.127323+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:14.144972+0000 mon.b (mon.2) 142 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:14.150595+0000 mon.b (mon.2) 143 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:14.151814+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:14.151961+0000 mon.b (mon.2) 144 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:14.189683+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T11:33:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:14 vm07 bash[17804]: audit 2026-03-10T11:33:14.189733+0000 mon.b (mon.2) 145 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.940709+0000 mon.a (mon.0) 774 : audit [INF] from='client.? 192.168.123.105:0/3129278235' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: cluster 2026-03-10T11:33:13.940860+0000 mon.a (mon.0) 775 : cluster [DBG] mgrmap e23: x(active, starting, since 0.49985s)
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.950732+0000 mon.b (mon.2) 127 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.951035+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.951659+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.952171+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.952751+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.953333+0000 mon.b (mon.2) 132 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.953737+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.954115+0000 mon.b (mon.2) 134 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.954461+0000 mon.b (mon.2) 135 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.954810+0000 mon.b (mon.2) 136 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.955162+0000 mon.b (mon.2) 137 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.955505+0000 mon.b (mon.2) 138 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.955941+0000 mon.b (mon.2) 139 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.956328+0000 mon.b (mon.2) 140 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:33:15.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:13.956969+0000 mon.b (mon.2) 141 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: cluster 2026-03-10T11:33:14.090072+0000 mon.a (mon.0) 776 : cluster [INF] Manager daemon x is now available
2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:14.100908+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: cephadm 2026-03-10T11:33:14.102730+0000 mgr.x (mgr.24733) 1 : cephadm [INF] Queued rgw.foo for migration
2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: cephadm 2026-03-10T11:33:14.103056+0000 mgr.x (mgr.24733) 2 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}}
2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:14.115530+0000 mon.a (mon.0)
778 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: cephadm 2026-03-10T11:33:14.118185+0000 mgr.x (mgr.24733) 3 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: cephadm 2026-03-10T11:33:14.118236+0000 mgr.x (mgr.24733) 4 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: cephadm 2026-03-10T11:33:14.118384+0000 mgr.x (mgr.24733) 5 : cephadm [INF] Checking for cert/key for grafana.a 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:14.127323+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:14.144972+0000 mon.b (mon.2) 142 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:14.150595+0000 mon.b (mon.2) 143 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:14.151814+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:14.151961+0000 mon.b (mon.2) 144 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:14.189683+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:14 vm05 bash[22470]: audit 2026-03-10T11:33:14.189733+0000 mon.b (mon.2) 145 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:15 vm05 bash[17722]: debug 2026-03-10T11:33:15.108+0000 7f88ae55a000 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.940709+0000 mon.a (mon.0) 774 : audit [INF] from='client.? 
192.168.123.105:0/3129278235' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: cluster 2026-03-10T11:33:13.940860+0000 mon.a (mon.0) 775 : cluster [DBG] mgrmap e23: x(active, starting, since 0.49985s) 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.950732+0000 mon.b (mon.2) 127 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.951035+0000 mon.b (mon.2) 128 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.951659+0000 mon.b (mon.2) 129 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.952171+0000 mon.b (mon.2) 130 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.952751+0000 mon.b (mon.2) 131 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.953333+0000 mon.b (mon.2) 132 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.953737+0000 mon.b (mon.2) 133 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.954115+0000 mon.b (mon.2) 134 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.954461+0000 mon.b (mon.2) 135 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.954810+0000 mon.b (mon.2) 136 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.955162+0000 mon.b (mon.2) 137 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.955505+0000 mon.b (mon.2) 138 
: audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.955941+0000 mon.b (mon.2) 139 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.956328+0000 mon.b (mon.2) 140 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:13.956969+0000 mon.b (mon.2) 141 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: cluster 2026-03-10T11:33:14.090072+0000 mon.a (mon.0) 776 : cluster [INF] Manager daemon x is now available 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:14.100908+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: cephadm 2026-03-10T11:33:14.102730+0000 mgr.x (mgr.24733) 1 : cephadm [INF] Queued rgw.foo for migration 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: cephadm 2026-03-10T11:33:14.103056+0000 mgr.x (mgr.24733) 2 : cephadm [INF] No Migration is needed for rgw spec: {'placement': {'count': 2}, 'service_id': 'foo', 'service_name': 'rgw.foo', 'service_type': 'rgw', 'spec': {'rgw_frontend_port': 8000, 'rgw_realm': 'r', 'rgw_zone': 'z'}} 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:14.115530+0000 mon.a (mon.0) 778 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: cephadm 2026-03-10T11:33:14.118185+0000 mgr.x (mgr.24733) 3 : cephadm [INF] Migrating certs/keys for iscsi.foo spec to cert store 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: cephadm 2026-03-10T11:33:14.118236+0000 mgr.x (mgr.24733) 4 : cephadm [INF] Migrating certs/keys for rgw.foo spec to cert store 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: cephadm 2026-03-10T11:33:14.118384+0000 mgr.x (mgr.24733) 5 : cephadm [INF] Checking for cert/key for grafana.a 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:14.127323+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:14.144972+0000 mon.b (mon.2) 142 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:14.150595+0000 mon.b (mon.2) 143 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:14.151814+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:14.151961+0000 mon.b (mon.2) 144 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:14.189683+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-10T11:33:15.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:14 vm05 bash[17453]: audit 2026-03-10T11:33:14.189733+0000 mon.b (mon.2) 145 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-10T11:33:15.547 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:15 vm05 bash[17722]: debug 2026-03-10T11:33:15.220+0000 7f88ae55a000 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:33:15.548 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:15 vm05 bash[17722]: debug 2026-03-10T11:33:15.436+0000 7f88ae55a000 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:33:15.798 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:15 vm05 bash[17722]: debug 2026-03-10T11:33:15.544+0000 7f88ae55a000 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:33:15.799 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:15 vm05 bash[17722]: debug 2026-03-10T11:33:15.600+0000 7f88ae55a000 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:33:15.799 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:15 vm05 bash[17722]: debug 2026-03-10T11:33:15.736+0000 7f88ae55a000 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:33:15.799 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:15 vm05 bash[17722]: debug 2026-03-10T11:33:15.792+0000 7f88ae55a000 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:15 vm05 bash[22470]: cephadm 2026-03-10T11:33:14.707091+0000 mgr.x (mgr.24733) 6 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:15 vm05 bash[22470]: cluster 2026-03-10T11:33:14.954911+0000 mon.a (mon.0) 782 : cluster [DBG] mgrmap e24: x(active, since 1.51389s) 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:15 vm05 bash[22470]: cluster 2026-03-10T11:33:14.980321+0000 mgr.x (mgr.24733) 7 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:15 vm05 bash[22470]: audit 2026-03-10T11:33:15.007278+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:15 vm05 bash[22470]: audit 2026-03-10T11:33:15.014667+0000 mon.a 
(mon.0) 784 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:15 vm05 bash[22470]: cephadm 2026-03-10T11:33:15.118892+0000 mgr.x (mgr.24733) 8 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:15 vm05 bash[17722]: debug 2026-03-10T11:33:15.864+0000 7f88ae55a000 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:15 vm05 bash[17453]: cephadm 2026-03-10T11:33:14.707091+0000 mgr.x (mgr.24733) 6 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:15 vm05 bash[17453]: cluster 2026-03-10T11:33:14.954911+0000 mon.a (mon.0) 782 : cluster [DBG] mgrmap e24: x(active, since 1.51389s) 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:15 vm05 bash[17453]: cluster 2026-03-10T11:33:14.980321+0000 mgr.x (mgr.24733) 7 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:15 vm05 bash[17453]: audit 2026-03-10T11:33:15.007278+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:15 vm05 bash[17453]: audit 2026-03-10T11:33:15.014667+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:16.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:15 vm05 bash[17453]: cephadm 2026-03-10T11:33:15.118892+0000 mgr.x (mgr.24733) 8 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-10T11:33:16.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:15 vm07 bash[17804]: cephadm 2026-03-10T11:33:14.707091+0000 mgr.x (mgr.24733) 6 : cephadm [INF] Deploying cephadm binary to vm07 2026-03-10T11:33:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:15 vm07 bash[17804]: cluster 2026-03-10T11:33:14.954911+0000 mon.a (mon.0) 782 : cluster [DBG] mgrmap e24: x(active, since 1.51389s) 2026-03-10T11:33:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:15 vm07 bash[17804]: cluster 2026-03-10T11:33:14.980321+0000 mgr.x (mgr.24733) 7 : cluster [DBG] pgmap v3: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:15 vm07 bash[17804]: audit 2026-03-10T11:33:15.007278+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:15 vm07 bash[17804]: audit 2026-03-10T11:33:15.014667+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:15 vm07 bash[17804]: cephadm 2026-03-10T11:33:15.118892+0000 mgr.x (mgr.24733) 8 : cephadm [INF] Deploying cephadm binary to vm05 2026-03-10T11:33:16.844 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:16 vm05 bash[17722]: debug 2026-03-10T11:33:16.404+0000 7f88ae55a000 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:33:16.844 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:16 vm05 bash[17722]: debug 2026-03-10T11:33:16.468+0000 7f88ae55a000 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:33:16.844 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:16 vm05 bash[17722]: debug 
2026-03-10T11:33:16.528+0000 7f88ae55a000 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:16 vm05 bash[22470]: cephadm 2026-03-10T11:33:15.703196+0000 mgr.x (mgr.24733) 9 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Bus STARTING 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:16 vm05 bash[22470]: cephadm 2026-03-10T11:33:15.804779+0000 mgr.x (mgr.24733) 10 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Serving on http://192.168.123.107:8765 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:16 vm05 bash[22470]: cephadm 2026-03-10T11:33:15.915212+0000 mgr.x (mgr.24733) 11 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Serving on https://192.168.123.107:7150 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:16 vm05 bash[22470]: cephadm 2026-03-10T11:33:15.915337+0000 mgr.x (mgr.24733) 12 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Bus STARTED 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:16 vm05 bash[22470]: cephadm 2026-03-10T11:33:15.915783+0000 mgr.x (mgr.24733) 13 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Client ('192.168.123.107', 53700) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:16 vm05 bash[22470]: cluster 2026-03-10T11:33:15.953296+0000 mgr.x (mgr.24733) 14 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:16 vm05 bash[17722]: debug 2026-03-10T11:33:16.872+0000 7f88ae55a000 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:16 vm05 bash[17722]: debug 2026-03-10T11:33:16.932+0000 7f88ae55a000 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:17 vm05 bash[17722]: debug 2026-03-10T11:33:17.016+0000 7f88ae55a000 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:17 vm05 bash[17722]: debug 2026-03-10T11:33:17.100+0000 7f88ae55a000 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:16 vm05 bash[17453]: cephadm 2026-03-10T11:33:15.703196+0000 mgr.x (mgr.24733) 9 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Bus STARTING 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:16 vm05 bash[17453]: cephadm 2026-03-10T11:33:15.804779+0000 mgr.x (mgr.24733) 10 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Serving on http://192.168.123.107:8765 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:16 vm05 bash[17453]: cephadm 2026-03-10T11:33:15.915212+0000 mgr.x (mgr.24733) 11 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Serving on https://192.168.123.107:7150 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:16 vm05 bash[17453]: cephadm 2026-03-10T11:33:15.915337+0000 mgr.x (mgr.24733) 12 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Bus STARTED 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:16 vm05 bash[17453]: cephadm 2026-03-10T11:33:15.915783+0000 mgr.x (mgr.24733) 13 : 
cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Client ('192.168.123.107', 53700) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:33:17.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:16 vm05 bash[17453]: cluster 2026-03-10T11:33:15.953296+0000 mgr.x (mgr.24733) 14 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:16 vm07 bash[17804]: cephadm 2026-03-10T11:33:15.703196+0000 mgr.x (mgr.24733) 9 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Bus STARTING 2026-03-10T11:33:17.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:16 vm07 bash[17804]: cephadm 2026-03-10T11:33:15.804779+0000 mgr.x (mgr.24733) 10 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Serving on http://192.168.123.107:8765 2026-03-10T11:33:17.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:16 vm07 bash[17804]: cephadm 2026-03-10T11:33:15.915212+0000 mgr.x (mgr.24733) 11 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Serving on https://192.168.123.107:7150 2026-03-10T11:33:17.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:16 vm07 bash[17804]: cephadm 2026-03-10T11:33:15.915337+0000 mgr.x (mgr.24733) 12 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Bus STARTED 2026-03-10T11:33:17.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:16 vm07 bash[17804]: cephadm 2026-03-10T11:33:15.915783+0000 mgr.x (mgr.24733) 13 : cephadm [INF] [10/Mar/2026:11:33:15] ENGINE Client ('192.168.123.107', 53700) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:33:17.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:16 vm07 bash[17804]: cluster 2026-03-10T11:33:15.953296+0000 mgr.x (mgr.24733) 14 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:17.725 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:17 vm05 bash[17722]: debug 2026-03-10T11:33:17.416+0000 7f88ae55a000 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:33:17.725 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:17 vm05 bash[17722]: debug 2026-03-10T11:33:17.600+0000 7f88ae55a000 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:33:17.725 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:17 vm05 bash[17722]: debug 2026-03-10T11:33:17.660+0000 7f88ae55a000 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:33:18.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:17 vm05 bash[22470]: cluster 2026-03-10T11:33:16.977335+0000 mon.a (mon.0) 785 : cluster [DBG] mgrmap e25: x(active, since 3s) 2026-03-10T11:33:18.094 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:17 vm05 bash[17722]: debug 2026-03-10T11:33:17.720+0000 7f88ae55a000 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:33:18.094 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:17 vm05 bash[17722]: debug 2026-03-10T11:33:17.884+0000 7f88ae55a000 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:33:18.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:17 vm05 bash[17453]: cluster 2026-03-10T11:33:16.977335+0000 mon.a (mon.0) 785 : cluster [DBG] mgrmap e25: x(active, since 3s) 2026-03-10T11:33:18.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:17 vm07 bash[17804]: cluster 
2026-03-10T11:33:16.977335+0000 mon.a (mon.0) 785 : cluster [DBG] mgrmap e25: x(active, since 3s) 2026-03-10T11:33:18.820 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:18 vm05 bash[17722]: debug 2026-03-10T11:33:18.360+0000 7f88ae55a000 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:33:18.820 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:18 vm05 bash[17722]: [10/Mar/2026:11:33:18] ENGINE Bus STARTING 2026-03-10T11:33:18.820 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:18 vm05 bash[17722]: CherryPy Checker: 2026-03-10T11:33:18.820 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:18 vm05 bash[17722]: The Application mounted at '' has an empty config. 2026-03-10T11:33:18.820 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:18 vm05 bash[17722]: [10/Mar/2026:11:33:18] ENGINE Serving on http://:::9283 2026-03-10T11:33:18.820 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:18 vm05 bash[17722]: [10/Mar/2026:11:33:18] ENGINE Bus STARTED 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:18 vm05 bash[22470]: cluster 2026-03-10T11:33:17.953636+0000 mgr.x (mgr.24733) 15 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:18 vm05 bash[22470]: cluster 2026-03-10T11:33:18.367419+0000 mon.a (mon.0) 786 : cluster [DBG] Standby manager daemon y started 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:18 vm05 bash[22470]: audit 2026-03-10T11:33:18.372308+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:18 vm05 bash[22470]: audit 2026-03-10T11:33:18.373290+0000 mon.c (mon.1) 40 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:18 vm05 bash[22470]: audit 2026-03-10T11:33:18.375222+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:18 vm05 bash[22470]: audit 2026-03-10T11:33:18.376591+0000 mon.c (mon.1) 42 : audit [DBG] from='mgr.? 
192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:33:19.094 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:18 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:18.820Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=6 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs" 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:18 vm05 bash[17453]: cluster 2026-03-10T11:33:17.953636+0000 mgr.x (mgr.24733) 15 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:18 vm05 bash[17453]: cluster 2026-03-10T11:33:18.367419+0000 mon.a (mon.0) 786 : cluster [DBG] Standby manager daemon y started 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:18 vm05 bash[17453]: audit 2026-03-10T11:33:18.372308+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:18 vm05 bash[17453]: audit 2026-03-10T11:33:18.373290+0000 mon.c (mon.1) 40 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:18 vm05 bash[17453]: audit 2026-03-10T11:33:18.375222+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T11:33:19.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:18 vm05 bash[17453]: audit 2026-03-10T11:33:18.376591+0000 mon.c (mon.1) 42 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:33:19.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:18 vm07 bash[17804]: cluster 2026-03-10T11:33:17.953636+0000 mgr.x (mgr.24733) 15 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:19.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:18 vm07 bash[17804]: cluster 2026-03-10T11:33:18.367419+0000 mon.a (mon.0) 786 : cluster [DBG] Standby manager daemon y started 2026-03-10T11:33:19.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:18 vm07 bash[17804]: audit 2026-03-10T11:33:18.372308+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T11:33:19.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:18 vm07 bash[17804]: audit 2026-03-10T11:33:18.373290+0000 mon.c (mon.1) 40 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:33:19.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:18 vm07 bash[17804]: audit 2026-03-10T11:33:18.375222+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.? 
192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T11:33:19.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:18 vm07 bash[17804]: audit 2026-03-10T11:33:18.376591+0000 mon.c (mon.1) 42 : audit [DBG] from='mgr.? 192.168.123.105:0/4107265686' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:33:20.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:20 vm05 bash[22470]: audit 2026-03-10T11:33:18.996581+0000 mon.b (mon.2) 146 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:33:20.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:20 vm05 bash[22470]: cluster 2026-03-10T11:33:19.001207+0000 mon.a (mon.0) 787 : cluster [DBG] mgrmap e26: x(active, since 5s), standbys: y 2026-03-10T11:33:20.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:20 vm05 bash[17453]: audit 2026-03-10T11:33:18.996581+0000 mon.b (mon.2) 146 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:33:20.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:20 vm05 bash[17453]: cluster 2026-03-10T11:33:19.001207+0000 mon.a (mon.0) 787 : cluster [DBG] mgrmap e26: x(active, since 5s), standbys: y 2026-03-10T11:33:20.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:20 vm07 bash[17804]: audit 2026-03-10T11:33:18.996581+0000 mon.b (mon.2) 146 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:33:20.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:20 vm07 bash[17804]: cluster 2026-03-10T11:33:19.001207+0000 mon.a (mon.0) 787 : cluster [DBG] mgrmap e26: x(active, since 5s), standbys: y 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:21 vm05 bash[22470]: cluster 2026-03-10T11:33:19.953978+0000 mgr.x (mgr.24733) 16 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:21 vm05 bash[22470]: audit 2026-03-10T11:33:20.437521+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:21 vm05 bash[22470]: audit 2026-03-10T11:33:20.444293+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:21 vm05 bash[22470]: audit 2026-03-10T11:33:20.935210+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:21 vm05 bash[22470]: audit 2026-03-10T11:33:20.945769+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:21 vm05 bash[22470]: audit 2026-03-10T11:33:21.051845+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:21 vm05 bash[22470]: audit 2026-03-10T11:33:21.058111+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:21 vm05 bash[22470]: audit 2026-03-10T11:33:21.061386+0000 mon.a (mon.0) 
794 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:21 vm05 bash[22470]: audit 2026-03-10T11:33:21.061504+0000 mon.b (mon.2) 147 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:21 vm05 bash[17453]: cluster 2026-03-10T11:33:19.953978+0000 mgr.x (mgr.24733) 16 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:21 vm05 bash[17453]: audit 2026-03-10T11:33:20.437521+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:21 vm05 bash[17453]: audit 2026-03-10T11:33:20.444293+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:21 vm05 bash[17453]: audit 2026-03-10T11:33:20.935210+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:21 vm05 bash[17453]: audit 2026-03-10T11:33:20.945769+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:21 vm05 bash[17453]: audit 2026-03-10T11:33:21.051845+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:21 vm05 bash[17453]: audit 2026-03-10T11:33:21.058111+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:21 vm05 bash[17453]: audit 2026-03-10T11:33:21.061386+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:21.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:21 vm05 bash[17453]: audit 2026-03-10T11:33:21.061504+0000 mon.b (mon.2) 147 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:21 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:33:21] "GET /metrics HTTP/1.1" 200 34538 "" "Prometheus/2.33.4" 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:21 vm07 bash[17804]: cluster 2026-03-10T11:33:19.953978+0000 mgr.x (mgr.24733) 16 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:21 vm07 bash[17804]: audit 2026-03-10T11:33:20.437521+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:21 vm07 bash[17804]: audit 2026-03-10T11:33:20.444293+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:21 vm07 bash[17804]: audit 2026-03-10T11:33:20.935210+0000 mon.a (mon.0) 
790 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:21 vm07 bash[17804]: audit 2026-03-10T11:33:20.945769+0000 mon.a (mon.0) 791 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:21 vm07 bash[17804]: audit 2026-03-10T11:33:21.051845+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:21 vm07 bash[17804]: audit 2026-03-10T11:33:21.058111+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:21 vm07 bash[17804]: audit 2026-03-10T11:33:21.061386+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:21 vm07 bash[17804]: audit 2026-03-10T11:33:21.061504+0000 mon.b (mon.2) 147 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.559884+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.566145+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.568207+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.568323+0000 mon.b (mon.2) 148 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.569546+0000 mon.b (mon.2) 149 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.570105+0000 mon.b (mon.2) 150 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.570914+0000 mgr.x (mgr.24733) 17 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.571019+0000 mgr.x (mgr.24733) 18 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.612496+0000 mgr.x (mgr.24733) 19 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:33:22.844 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.613517+0000 mgr.x (mgr.24733) 20 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.650382+0000 mgr.x (mgr.24733) 21 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.652033+0000 mgr.x (mgr.24733) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.686739+0000 mgr.x (mgr.24733) 23 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.690296+0000 mgr.x (mgr.24733) 24 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.728478+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.733111+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.736984+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.741975+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.746827+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.761647+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.765578+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.769337+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:21.772765+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.773927+0000 mgr.x (mgr.24733) 25 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-10T11:33:22.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cephadm 2026-03-10T11:33:21.779355+0000 mgr.x (mgr.24733) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm05 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: cluster 2026-03-10T11:33:21.954493+0000 mgr.x (mgr.24733) 27 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:22 vm05 bash[17453]: audit 2026-03-10T11:33:22.202623+0000 mgr.x (mgr.24733) 28 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.559884+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.566145+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.568207+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.568323+0000 mon.b (mon.2) 148 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.569546+0000 mon.b (mon.2) 149 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.570105+0000 mon.b (mon.2) 150 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.570914+0000 mgr.x (mgr.24733) 17 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.571019+0000 mgr.x (mgr.24733) 18 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.612496+0000 mgr.x (mgr.24733) 19 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.613517+0000 mgr.x (mgr.24733) 20 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.650382+0000 mgr.x (mgr.24733) 21 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 
2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.652033+0000 mgr.x (mgr.24733) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.686739+0000 mgr.x (mgr.24733) 23 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.690296+0000 mgr.x (mgr.24733) 24 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.728478+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.733111+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.736984+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.741975+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.746827+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.761647+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.765578+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.769337+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:21.772765+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.773927+0000 mgr.x (mgr.24733) 25 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cephadm 2026-03-10T11:33:21.779355+0000 mgr.x (mgr.24733) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm05 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: cluster 2026-03-10T11:33:21.954493+0000 mgr.x (mgr.24733) 27 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T11:33:22.845 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:22 vm05 bash[22470]: audit 2026-03-10T11:33:22.202623+0000 mgr.x (mgr.24733) 28 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.559884+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.566145+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.568207+0000 mon.a (mon.0) 797 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.568323+0000 mon.b (mon.2) 148 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.569546+0000 mon.b (mon.2) 149 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.570105+0000 mon.b (mon.2) 150 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.570914+0000 mgr.x (mgr.24733) 17 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.571019+0000 mgr.x (mgr.24733) 18 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.612496+0000 mgr.x (mgr.24733) 19 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.613517+0000 mgr.x (mgr.24733) 20 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.650382+0000 mgr.x (mgr.24733) 21 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.652033+0000 mgr.x (mgr.24733) 22 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.686739+0000 mgr.x (mgr.24733) 23 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.690296+0000 mgr.x (mgr.24733) 24 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.728478+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.733111+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.736984+0000 mon.a (mon.0) 800 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.741975+0000 mon.a (mon.0) 801 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.746827+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.761647+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.765578+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.769337+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:21.772765+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.773927+0000 mgr.x (mgr.24733) 25 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)...
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cephadm 2026-03-10T11:33:21.779355+0000 mgr.x (mgr.24733) 26 : cephadm [INF] Deploying daemon alertmanager.a on vm05
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: cluster 2026-03-10T11:33:21.954493+0000 mgr.x (mgr.24733) 27 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T11:33:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:22 vm07 bash[17804]: audit 2026-03-10T11:33:22.202623+0000 mgr.x (mgr.24733) 28 : audit [DBG] from='client.24592 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:23 vm05 bash[42794]: level=error ts=2026-03-10T11:33:23.531Z caller=dispatch.go:354 component=dispatcher msg="Notify for alerts failed" num_alerts=10 err="ceph-dashboard/webhook[0]: notify retry canceled after 8 attempts: Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs; ceph-dashboard/webhook[1]: notify retry canceled after 8 attempts: Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:33:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:23.533Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.107:8443/api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.107 because it doesn't contain any IP SANs"
2026-03-10T11:33:23.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:23 vm05 bash[42794]: level=warn ts=2026-03-10T11:33:23.533Z caller=notify.go:724 component=dispatcher receiver=ceph-dashboard integration=webhook[0] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"https://192.168.123.105:8443//api/prometheus_receiver\": x509: cannot validate certificate for 192.168.123.105 because it doesn't contain any IP SANs"
2026-03-10T11:33:24.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:23 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:33:23] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:33:25.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:25 vm05 bash[22470]: cluster 2026-03-10T11:33:23.954861+0000 mgr.x (mgr.24733) 29 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:33:25.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:25 vm05 bash[17453]: cluster 2026-03-10T11:33:23.954861+0000 mgr.x (mgr.24733) 29 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:33:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:25 vm07 bash[17804]: cluster 2026-03-10T11:33:23.954861+0000 mgr.x (mgr.24733) 29 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
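The Alertmanager errors above show its ceph-dashboard webhook failing TLS verification: the notification targets are raw IPs (192.168.123.105/107:8443), but the dashboard's self-signed certificate carries no IP SANs, so x509 validation cannot succeed (note also the doubled slash in webhook[0]'s URL; verification fails before the path matters). One way to clear this, sketched with illustrative file names and assuming OpenSSL >= 1.1.1 for -addext, is to install a dashboard certificate that lists the IPs as SANs:

    # Illustrative self-signed cert that includes the dashboard IPs as SANs
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout dashboard.key -out dashboard.crt \
        -subj "/CN=ceph-dashboard" \
        -addext "subjectAltName=IP:192.168.123.105,IP:192.168.123.107"
    ceph dashboard set-ssl-certificate -i dashboard.crt
    ceph dashboard set-ssl-certificate-key -i dashboard.key
    # Restart the dashboard so the new certificate is served
    ceph mgr module disable dashboard && ceph mgr module enable dashboard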
2026-03-10T11:33:26.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.094 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.094 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.095 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.095 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.095 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.095 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.095 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.346 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.346 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.346 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.347 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.347 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
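The KillMode=none warnings here and below repeat once per journalctl stream for every cephadm-managed unit on the host, because the generated unit template ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service sets KillMode=none; they are cosmetic for this run. A drop-in override would silence them (illustrative only: cephadm owns this unit template and may regenerate it on redeploy, discarding the override's effect):

    # Illustrative drop-in; cephadm regenerates the unit template on redeploy
    sudo mkdir -p /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d
    printf '[Service]\nKillMode=mixed\n' | \
        sudo tee /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d/override.conf
    sudo systemctl daemon-reload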
2026-03-10T11:33:26.347 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: Stopping Ceph alertmanager.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:33:26.347 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[42794]: level=info ts=2026-03-10T11:33:26.137Z caller=main.go:557 msg="Received SIGTERM, exiting gracefully..."
2026-03-10T11:33:26.347 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[50784]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-alertmanager-a
2026-03-10T11:33:26.347 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@alertmanager.a.service: Deactivated successfully.
2026-03-10T11:33:26.347 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: Stopped Ceph alertmanager.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:33:26.347 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.347 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:26.746 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: Started Ceph alertmanager.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:33:26.746 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[50896]: ts=2026-03-10T11:33:26.561Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
2026-03-10T11:33:26.746 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[50896]: ts=2026-03-10T11:33:26.561Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
2026-03-10T11:33:26.746 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[50896]: ts=2026-03-10T11:33:26.564Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.105 port=9094
2026-03-10T11:33:26.746 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[50896]: ts=2026-03-10T11:33:26.565Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
2026-03-10T11:33:26.746 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[50896]: ts=2026-03-10T11:33:26.590Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T11:33:26.747 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[50896]: ts=2026-03-10T11:33:26.590Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
2026-03-10T11:33:26.747 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[50896]: ts=2026-03-10T11:33:26.592Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093
2026-03-10T11:33:26.747 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 bash[50896]: ts=2026-03-10T11:33:26.592Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093
2026-03-10T11:33:27.025 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.026 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.026 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.026 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.026 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.026 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: Stopping Ceph node-exporter.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:33:27.026 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.026 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.026 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.026 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:33:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.314 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.314 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.314 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.314 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.314 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.315 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.315 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.315 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[51017]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-node-exporter-a
2026-03-10T11:33:27.315 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.a.service: Main process exited, code=exited, status=143/n/a
2026-03-10T11:33:27.315 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.a.service: Failed with result 'exit-code'.
2026-03-10T11:33:27.315 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: Stopped Ceph node-exporter.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:33:27.315 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:27.315 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: Started Ceph node-exporter.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:33:27.315 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:27 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
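The status=143 exit above is not a real failure: 143 = 128 + 15, i.e. the node-exporter container exited on the SIGTERM that systemd sent during the managed stop, and the subsequent "Failed with result 'exit-code'" line is expected noise ahead of the immediate restart. A quick shell check of the mapping:

    # Exit status 143 encodes termination by signal 15
    kill -l $((143 - 128))   # prints TERM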
2026-03-10T11:33:27.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: cluster 2026-03-10T11:33:25.955403+0000 mgr.x (mgr.24733) 30 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:33:27.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: audit 2026-03-10T11:33:26.460228+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: audit 2026-03-10T11:33:26.471424+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: cephadm 2026-03-10T11:33:26.474631+0000 mgr.x (mgr.24733) 31 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)...
2026-03-10T11:33:27.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: cephadm 2026-03-10T11:33:26.474892+0000 mgr.x (mgr.24733) 32 : cephadm [INF] Deploying daemon node-exporter.a on vm05
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: audit 2026-03-10T11:33:27.306143+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: audit 2026-03-10T11:33:27.313230+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: audit 2026-03-10T11:33:27.315435+0000 mon.b (mon.2) 151 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: audit 2026-03-10T11:33:27.315728+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[17453]: audit 2026-03-10T11:33:27.319294+0000 mon.b (mon.2) 152 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:33:27.590 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:27 vm05 bash[51128]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: cluster 2026-03-10T11:33:25.955403+0000 mgr.x (mgr.24733) 30 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: audit 2026-03-10T11:33:26.460228+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: audit 2026-03-10T11:33:26.471424+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: cephadm 2026-03-10T11:33:26.474631+0000 mgr.x (mgr.24733) 31 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)...
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: cephadm 2026-03-10T11:33:26.474892+0000 mgr.x (mgr.24733) 32 : cephadm [INF] Deploying daemon node-exporter.a on vm05
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: audit 2026-03-10T11:33:27.306143+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: audit 2026-03-10T11:33:27.313230+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: audit 2026-03-10T11:33:27.315435+0000 mon.b (mon.2) 151 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: audit 2026-03-10T11:33:27.315728+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:33:27.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:27 vm05 bash[22470]: audit 2026-03-10T11:33:27.319294+0000 mon.b (mon.2) 152 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:33:27.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: cluster 2026-03-10T11:33:25.955403+0000 mgr.x (mgr.24733) 30 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:33:27.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: audit 2026-03-10T11:33:26.460228+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: audit 2026-03-10T11:33:26.471424+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: cephadm 2026-03-10T11:33:26.474631+0000 mgr.x (mgr.24733) 31 : cephadm [INF] Reconfiguring node-exporter.a (dependencies changed)...
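The auth get-or-create audit entries above record the capability grant cephadm issues while reconfiguring the iSCSI daemon. Reconstructed from the JSON payload in the audit record, the equivalent CLI invocation would be:

    ceph auth get-or-create client.iscsi.foo.vm05.txapnk \
        mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
        mgr 'allow command "service status"' \
        osd 'allow rwx'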
2026-03-10T11:33:27.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: cephadm 2026-03-10T11:33:26.474892+0000 mgr.x (mgr.24733) 32 : cephadm [INF] Deploying daemon node-exporter.a on vm05
2026-03-10T11:33:27.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: audit 2026-03-10T11:33:27.306143+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: audit 2026-03-10T11:33:27.313230+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:27.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: audit 2026-03-10T11:33:27.315435+0000 mon.b (mon.2) 151 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:33:27.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: audit 2026-03-10T11:33:27.315728+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:33:27.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:27 vm07 bash[17804]: audit 2026-03-10T11:33:27.319294+0000 mon.b (mon.2) 152 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:33:28.463 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: Stopping Ceph grafana.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:33:28.463 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37888]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-grafana.a
2026-03-10T11:33:28.463 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[33470]: t=2026-03-10T11:33:28+0000 lvl=info msg="Shutdown started" logger=server reason="System signal: terminated"
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: cephadm 2026-03-10T11:33:27.315108+0000 mgr.x (mgr.24733) 33 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)...
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: cephadm 2026-03-10T11:33:27.320105+0000 mgr.x (mgr.24733) 34 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: audit 2026-03-10T11:33:27.857770+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: audit 2026-03-10T11:33:27.864050+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: cephadm 2026-03-10T11:33:27.865572+0000 mgr.x (mgr.24733) 35 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)...
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: cephadm 2026-03-10T11:33:27.870127+0000 mgr.x (mgr.24733) 36 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: audit 2026-03-10T11:33:27.906529+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: audit 2026-03-10T11:33:27.919421+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: audit 2026-03-10T11:33:27.921450+0000 mon.b (mon.2) 153 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: audit 2026-03-10T11:33:27.921946+0000 mgr.x (mgr.24733) 37 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: cephadm 2026-03-10T11:33:27.924365+0000 mgr.x (mgr.24733) 38 : cephadm [INF] Reconfiguring daemon grafana.a on vm07
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: cluster 2026-03-10T11:33:27.955659+0000 mgr.x (mgr.24733) 39 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T11:33:28.724 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 bash[17804]: audit 2026-03-10T11:33:28.380254+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.105:0/3613889999' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:33:28.724 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37895]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-grafana-a
2026-03-10T11:33:28.724 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37929]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-grafana.a
2026-03-10T11:33:28.724 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@grafana.a.service: Deactivated successfully.
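Because the Grafana certificate regenerated above is self-signed, cephadm immediately disables the dashboard's verification of the Grafana API, as the set-grafana-api-ssl-verify dispatch shows; run by hand, the same toggle is:

    ceph dashboard set-grafana-api-ssl-verify False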
2026-03-10T11:33:28.724 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: Stopped Ceph grafana.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:33:28.724 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: Started Ceph grafana.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:33:28.724 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="The state of unified alerting is still not defined. The decision will be made during as we run the database migrations" logger=settings
2026-03-10T11:33:28.724 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=warn msg="falling back to legacy setting of 'min_interval_seconds'; please use the configuration option in the `unified_alerting` section if Grafana 8 alerts are enabled." logger=settings
2026-03-10T11:33:28.724 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Config loaded from" logger=settings file=/usr/share/grafana/conf/defaults.ini
2026-03-10T11:33:28.724 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Config loaded from" logger=settings file=/etc/grafana/grafana.ini
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_DATA=/var/lib/grafana"
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_LOGS=/var/log/grafana"
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Config overridden from Environment variable" logger=settings var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Path Home" logger=settings path=/usr/share/grafana
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Path Data" logger=settings path=/var/lib/grafana
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Path Logs" logger=settings path=/var/log/grafana
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Path Plugins" logger=settings path=/var/lib/grafana/plugins
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Path Provisioning" logger=settings path=/etc/grafana/provisioning
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="App mode production" logger=settings
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Connecting to DB" logger=sqlstore dbtype=sqlite3
2026-03-10T11:33:28.725 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=warn msg="SQLite database file has broader permissions than it should" logger=sqlstore path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: cephadm 2026-03-10T11:33:27.315108+0000 mgr.x (mgr.24733) 33 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)...
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: cephadm 2026-03-10T11:33:27.320105+0000 mgr.x (mgr.24733) 34 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: audit 2026-03-10T11:33:27.857770+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: audit 2026-03-10T11:33:27.864050+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: cephadm 2026-03-10T11:33:27.865572+0000 mgr.x (mgr.24733) 35 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)...
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: cephadm 2026-03-10T11:33:27.870127+0000 mgr.x (mgr.24733) 36 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: audit 2026-03-10T11:33:27.906529+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: audit 2026-03-10T11:33:27.919421+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: audit 2026-03-10T11:33:27.921450+0000 mon.b (mon.2) 153 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: audit 2026-03-10T11:33:27.921946+0000 mgr.x (mgr.24733) 37 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: cephadm 2026-03-10T11:33:27.924365+0000 mgr.x (mgr.24733) 38 : cephadm [INF] Reconfiguring daemon grafana.a on vm07
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: cluster 2026-03-10T11:33:27.955659+0000 mgr.x (mgr.24733) 39 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:28 vm05 bash[22470]: audit 2026-03-10T11:33:28.380254+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.105:0/3613889999' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: cephadm 2026-03-10T11:33:27.315108+0000 mgr.x (mgr.24733) 33 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)...
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: cephadm 2026-03-10T11:33:27.320105+0000 mgr.x (mgr.24733) 34 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: audit 2026-03-10T11:33:27.857770+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: audit 2026-03-10T11:33:27.864050+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: cephadm 2026-03-10T11:33:27.865572+0000 mgr.x (mgr.24733) 35 : cephadm [INF] Reconfiguring grafana.a (dependencies changed)...
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: cephadm 2026-03-10T11:33:27.870127+0000 mgr.x (mgr.24733) 36 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: audit 2026-03-10T11:33:27.906529+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: audit 2026-03-10T11:33:27.919421+0000 mon.a (mon.0) 815 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: audit 2026-03-10T11:33:27.921450+0000 mon.b (mon.2) 153 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: audit 2026-03-10T11:33:27.921946+0000 mgr.x (mgr.24733) 37 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: cephadm 2026-03-10T11:33:27.924365+0000 mgr.x (mgr.24733) 38 : cephadm [INF] Reconfiguring daemon grafana.a on vm07
2026-03-10T11:33:28.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: cluster 2026-03-10T11:33:27.955659+0000 mgr.x (mgr.24733) 39 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T11:33:28.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[17453]: audit 2026-03-10T11:33:28.380254+0000 mon.c (mon.1) 43 : audit [DBG] from='client.? 192.168.123.105:0/3613889999' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:33:28.845 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[51128]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-10T11:33:28.845 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:28 vm05 bash[50896]: ts=2026-03-10T11:33:28.565Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000628738s
2026-03-10T11:33:29.000 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.000 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.000 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.000 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.000 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.000 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.000 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: Stopping Ceph node-exporter.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:33:29.000 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.000 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Starting DB migrations" logger=migrator
2026-03-10T11:33:29.000 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="migrations completed" logger=migrator performed=0 skipped=377 duration=322.816µs
2026-03-10T11:33:29.000 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Created default organization" logger=sqlstore
2026-03-10T11:33:29.000 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Initialising plugins" logger=plugin.manager
2026-03-10T11:33:29.000 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=input
2026-03-10T11:33:29.000 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=grafana-piechart-panel
2026-03-10T11:33:29.000 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Plugin registered" logger=plugin.manager pluginId=vonage-status-panel
2026-03-10T11:33:29.000 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="Live Push Gateway initialization" logger=live.push_http
2026-03-10T11:33:29.001 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="deleted datasource based on configuration" logger=provisioning.datasources name=Dashboard1
2026-03-10T11:33:29.001 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Dashboard1 uid=P43CA22E17D0F9596
2026-03-10T11:33:29.001 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="inserting datasource from configuration " logger=provisioning.datasources name=Loki uid=P8E80F9AEF21F6940
2026-03-10T11:33:29.001 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="HTTP Server Listen" logger=http.server address=[::]:3000 protocol=https subUrl= socket=
2026-03-10T11:33:29.001 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="warming cache for startup" logger=ngalert
2026-03-10T11:33:29.001 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 bash[37956]: t=2026-03-10T11:33:28+0000 lvl=info msg="starting MultiOrg Alertmanager" logger=ngalert.multiorg.alertmanager
2026-03-10T11:33:29.001 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.001 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.250 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.250 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.250 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.250 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[38069]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-node-exporter-b
2026-03-10T11:33:29.250 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.b.service: Main process exited, code=exited, status=143/n/a
2026-03-10T11:33:29.250 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.b.service: Failed with result 'exit-code'.
2026-03-10T11:33:29.250 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: Stopped Ceph node-exporter.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:33:29.344 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 2abcce694348: Pulling fs layer
2026-03-10T11:33:29.344 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 455fd88e5221: Pulling fs layer
2026-03-10T11:33:29.344 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 324153f2810a: Pulling fs layer
2026-03-10T11:33:29.529 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.530 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.530 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.530 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.530 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.530 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:33:29.530 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:29 vm07 systemd[1]: Started Ceph node-exporter.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:33:29.530 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[38182]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: audit 2026-03-10T11:33:28.528182+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: audit 2026-03-10T11:33:28.533861+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: cephadm 2026-03-10T11:33:28.535692+0000 mgr.x (mgr.24733) 40 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)...
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: cephadm 2026-03-10T11:33:28.536005+0000 mgr.x (mgr.24733) 41 : cephadm [INF] Deploying daemon node-exporter.b on vm07
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: audit 2026-03-10T11:33:28.603037+0000 mon.c (mon.1) 44 : audit [INF] from='client.? 192.168.123.105:0/1661934381' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2576592578"}]: dispatch
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: audit 2026-03-10T11:33:28.603543+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2576592578"}]: dispatch
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: audit 2026-03-10T11:33:29.151815+0000 mon.b (mon.2) 154 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: audit 2026-03-10T11:33:29.343974+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: audit 2026-03-10T11:33:29.351704+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:29 vm05 bash[22470]: cephadm 2026-03-10T11:33:29.353568+0000 mgr.x (mgr.24733) 42 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: audit 2026-03-10T11:33:28.528182+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: audit 2026-03-10T11:33:28.533861+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: cephadm 2026-03-10T11:33:28.535692+0000 mgr.x (mgr.24733) 40 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)...
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: cephadm 2026-03-10T11:33:28.536005+0000 mgr.x (mgr.24733) 41 : cephadm [INF] Deploying daemon node-exporter.b on vm07
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: audit 2026-03-10T11:33:28.603037+0000 mon.c (mon.1) 44 : audit [INF] from='client.? 192.168.123.105:0/1661934381' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2576592578"}]: dispatch
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: audit 2026-03-10T11:33:28.603543+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2576592578"}]: dispatch
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: audit 2026-03-10T11:33:29.151815+0000 mon.b (mon.2) 154 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: audit 2026-03-10T11:33:29.343974+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: audit 2026-03-10T11:33:29.351704+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[17453]: cephadm 2026-03-10T11:33:29.353568+0000 mgr.x (mgr.24733) 42 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T11:33:29.844 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 455fd88e5221: Verifying Checksum
2026-03-10T11:33:29.844 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 455fd88e5221: Download complete
2026-03-10T11:33:29.844 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 2abcce694348: Verifying Checksum
2026-03-10T11:33:29.844 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 2abcce694348: Download complete
2026-03-10T11:33:29.845 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 2abcce694348: Pull complete
2026-03-10T11:33:29.845 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 324153f2810a: Verifying Checksum
2026-03-10T11:33:29.845 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 324153f2810a: Download complete
2026-03-10T11:33:29.845 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 455fd88e5221: Pull complete
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: audit 2026-03-10T11:33:28.528182+0000 mon.a (mon.0) 816 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: audit 2026-03-10T11:33:28.533861+0000 mon.a (mon.0) 817 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: cephadm 2026-03-10T11:33:28.535692+0000 mgr.x (mgr.24733) 40 : cephadm [INF] Reconfiguring node-exporter.b (dependencies changed)...
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: cephadm 2026-03-10T11:33:28.536005+0000 mgr.x (mgr.24733) 41 : cephadm [INF] Deploying daemon node-exporter.b on vm07
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: audit 2026-03-10T11:33:28.603037+0000 mon.c (mon.1) 44 : audit [INF] from='client.? 192.168.123.105:0/1661934381' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2576592578"}]: dispatch
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: audit 2026-03-10T11:33:28.603543+0000 mon.a (mon.0) 818 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2576592578"}]: dispatch
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: audit 2026-03-10T11:33:29.151815+0000 mon.b (mon.2) 154 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: audit 2026-03-10T11:33:29.343974+0000 mon.a (mon.0) 819 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: audit 2026-03-10T11:33:29.351704+0000 mon.a (mon.0) 820 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:29.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:29 vm07 bash[17804]: cephadm 2026-03-10T11:33:29.353568+0000 mgr.x (mgr.24733) 42 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T11:33:30.344 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: 324153f2810a: Pull complete
2026-03-10T11:33:30.344 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80
2026-03-10T11:33:30.344 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:29 vm05 bash[51128]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0
2026-03-10T11:33:30.344 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.039Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
2026-03-10T11:33:30.344 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.039Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.039Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=arp
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=bcache
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=bonding
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=btrfs
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=conntrack
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=cpu
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=cpufreq
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=diskstats
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=dmi
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=edac
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=entropy
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=fibrechannel
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=filefd
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=filesystem
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=hwmon
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=infiniband
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=ipvs
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=loadavg
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=mdadm
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=meminfo
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=netclass
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=netdev
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=netstat
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=nfs
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=nfsd
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=nvme
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=os
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=powersupplyclass
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=pressure
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=rapl
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=schedstat
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=selinux
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=sockstat
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=softnet
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=stat
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=tapestats
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=textfile
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=thermal_zone
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=time
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=udp_queues
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=uname
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=vmstat
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=xfs
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=node_exporter.go:117 level=info collector=zfs
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
2026-03-10T11:33:30.345 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[51128]: ts=2026-03-10T11:33:30.040Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:30 vm05 bash[22470]: cephadm 2026-03-10T11:33:29.513392+0000 mgr.x (mgr.24733) 43 : cephadm [INF] Deploying daemon prometheus.a on vm07
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:30 vm05 bash[22470]: audit 2026-03-10T11:33:29.546431+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2576592578"}]': finished
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:30 vm05 bash[22470]: cluster 2026-03-10T11:33:29.551754+0000 mon.a (mon.0) 822 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:30 vm05 bash[22470]: audit 2026-03-10T11:33:29.779651+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 192.168.123.105:0/3058471627' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4225849513"}]: dispatch
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:30 vm05 bash[22470]: cluster 2026-03-10T11:33:29.956030+0000 mgr.x (mgr.24733) 44 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[17453]: cephadm 2026-03-10T11:33:29.513392+0000 mgr.x (mgr.24733) 43 : cephadm [INF] Deploying daemon prometheus.a on vm07
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[17453]: audit 2026-03-10T11:33:29.546431+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2576592578"}]': finished
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[17453]: cluster 2026-03-10T11:33:29.551754+0000 mon.a (mon.0) 822 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[17453]: audit 2026-03-10T11:33:29.779651+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 192.168.123.105:0/3058471627' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4225849513"}]: dispatch
2026-03-10T11:33:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:30 vm05 bash[17453]: cluster 2026-03-10T11:33:29.956030+0000 mgr.x (mgr.24733) 44 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:33:30.921 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:30 vm07 bash[17804]: cephadm 2026-03-10T11:33:29.513392+0000 mgr.x (mgr.24733) 43 : cephadm [INF] Deploying daemon prometheus.a on vm07
2026-03-10T11:33:30.921 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:30 vm07 bash[17804]: audit 2026-03-10T11:33:29.546431+0000 mon.a (mon.0) 821 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2576592578"}]': finished
2026-03-10T11:33:30.921 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:30 vm07 bash[17804]: cluster 2026-03-10T11:33:29.551754+0000 mon.a (mon.0) 822 : cluster [DBG] osdmap e83: 8 total, 8 up, 8 in
2026-03-10T11:33:30.921 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:30 vm07 bash[17804]: audit 2026-03-10T11:33:29.779651+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 192.168.123.105:0/3058471627' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4225849513"}]: dispatch
2026-03-10T11:33:30.921 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:30 vm07 bash[17804]: cluster 2026-03-10T11:33:29.956030+0000 mgr.x (mgr.24733) 44 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 19 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:33:31.196 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:30 vm07 bash[38182]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-10T11:33:31.553 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 2abcce694348: Pulling fs layer
2026-03-10T11:33:31.553 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 455fd88e5221: Pulling fs layer
2026-03-10T11:33:31.553 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 324153f2810a: Pulling fs layer
2026-03-10T11:33:31.819 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:31 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:33:31] "GET /metrics HTTP/1.1" 200 37529 "" "Prometheus/2.33.4"
2026-03-10T11:33:31.819 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[17804]: audit 2026-03-10T11:33:30.553191+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.105:0/3058471627' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4225849513"}]': finished
2026-03-10T11:33:31.819 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[17804]: cluster 2026-03-10T11:33:30.553279+0000 mon.a (mon.0) 825 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-10T11:33:31.819 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[17804]: audit 2026-03-10T11:33:30.755292+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.105:0/1317184513' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/313876564"}]: dispatch
2026-03-10T11:33:31.820 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 455fd88e5221: Verifying Checksum
2026-03-10T11:33:31.820 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 455fd88e5221: Download complete
2026-03-10T11:33:31.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:31 vm05 bash[22470]: audit 2026-03-10T11:33:30.553191+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.105:0/3058471627' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4225849513"}]': finished
2026-03-10T11:33:31.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:31 vm05 bash[22470]: cluster 2026-03-10T11:33:30.553279+0000 mon.a (mon.0) 825 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-10T11:33:31.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:31 vm05 bash[22470]: audit 2026-03-10T11:33:30.755292+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.105:0/1317184513' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/313876564"}]: dispatch
2026-03-10T11:33:31.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:31 vm05 bash[17453]: audit 2026-03-10T11:33:30.553191+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.105:0/3058471627' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4225849513"}]': finished
2026-03-10T11:33:31.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:31 vm05 bash[17453]: cluster 2026-03-10T11:33:30.553279+0000 mon.a (mon.0) 825 : cluster [DBG] osdmap e84: 8 total, 8 up, 8 in
2026-03-10T11:33:31.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:31 vm05 bash[17453]: audit 2026-03-10T11:33:30.755292+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.105:0/1317184513' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/313876564"}]: dispatch
2026-03-10T11:33:32.096 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 2abcce694348: Verifying Checksum
2026-03-10T11:33:32.096 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 2abcce694348: Download complete
2026-03-10T11:33:32.096 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 2abcce694348: Pull complete
2026-03-10T11:33:32.096 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 324153f2810a: Verifying Checksum
2026-03-10T11:33:32.096 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:31 vm07 bash[38182]: 324153f2810a: Download complete
2026-03-10T11:33:32.096 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: 455fd88e5221: Pull complete
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: 324153f2810a: Pull complete
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.256Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.256Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.257Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.257Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.257Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=arp
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=bcache
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=bonding
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=btrfs
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=conntrack
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=cpu
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=cpufreq
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=diskstats
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=dmi
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.258Z caller=node_exporter.go:117 level=info collector=edac
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.259Z caller=node_exporter.go:117 level=info collector=entropy
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.259Z caller=node_exporter.go:117 level=info collector=fibrechannel
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.259Z caller=node_exporter.go:117 level=info collector=filefd
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.259Z caller=node_exporter.go:117 level=info collector=filesystem
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.259Z caller=node_exporter.go:117 level=info collector=hwmon
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.259Z caller=node_exporter.go:117 level=info collector=infiniband
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.259Z caller=node_exporter.go:117 level=info collector=ipvs
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.259Z caller=node_exporter.go:117 level=info collector=loadavg
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=mdadm
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=meminfo
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=netclass
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=netdev
2026-03-10T11:33:32.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=netstat
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=nfs
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=nfsd
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=nvme
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=os
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=powersupplyclass
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=pressure
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.260Z caller=node_exporter.go:117 level=info collector=rapl
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.261Z caller=node_exporter.go:117 level=info collector=schedstat
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.261Z caller=node_exporter.go:117 level=info collector=selinux
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.261Z caller=node_exporter.go:117 level=info collector=sockstat
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.261Z caller=node_exporter.go:117 level=info collector=softnet
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.261Z caller=node_exporter.go:117 level=info collector=stat
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.261Z caller=node_exporter.go:117 level=info collector=tapestats
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.261Z caller=node_exporter.go:117 level=info collector=textfile
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.261Z caller=node_exporter.go:117 level=info collector=thermal_zone
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.261Z caller=node_exporter.go:117 level=info collector=time
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.262Z caller=node_exporter.go:117 level=info collector=udp_queues
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.262Z caller=node_exporter.go:117 level=info collector=uname
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.262Z caller=node_exporter.go:117 level=info collector=vmstat
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.262Z caller=node_exporter.go:117 level=info collector=xfs
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.262Z caller=node_exporter.go:117 level=info collector=zfs
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.262Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
2026-03-10T11:33:32.447 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[38182]: ts=2026-03-10T11:33:32.262Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
2026-03-10T11:33:32.447 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:32 vm07 bash[34037]: ts=2026-03-10T11:33:32.314Z caller=manager.go:609 level=warn component="rule manager" group=pools msg="Evaluating rule failed" rule="alert: CephPoolGrowthWarning\nexpr: (predict_linear(ceph_pool_percent_used[2d], 3600 * 24 * 5) * on(pool_id) group_right()\n ceph_pool_metadata) >= 95\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.9.2\n severity: warning\n type: ceph_default\nannotations:\n description: |\n Pool '{{ $labels.name }}' will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.\n summary: Pool growth rate may soon exceed it's capacity\n" err="found duplicate series for the match group {pool_id=\"1\"} on the left hand-side of the operation: [{instance=\"192.168.123.107:9283\", job=\"ceph\", pool_id=\"1\"}, {instance=\"192.168.123.105:9283\", job=\"ceph\", pool_id=\"1\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:32 vm05 bash[17453]: audit 2026-03-10T11:33:31.562671+0000 mon.a (mon.0) 827 : audit [INF] from='client.? 192.168.123.105:0/1317184513' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/313876564"}]': finished
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:32 vm05 bash[17453]: cluster 2026-03-10T11:33:31.562748+0000 mon.a (mon.0) 828 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:32 vm05 bash[17453]: audit 2026-03-10T11:33:31.773291+0000 mon.c (mon.1) 45 : audit [INF] from='client.? 192.168.123.105:0/222174725' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1703505188"}]: dispatch
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:32 vm05 bash[17453]: audit 2026-03-10T11:33:31.774063+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1703505188"}]: dispatch
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:32 vm05 bash[17453]: cluster 2026-03-10T11:33:31.956380+0000 mgr.x (mgr.24733) 45 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:32 vm05 bash[22470]: audit 2026-03-10T11:33:31.562671+0000 mon.a (mon.0) 827 : audit [INF] from='client.? 192.168.123.105:0/1317184513' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/313876564"}]': finished
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:32 vm05 bash[22470]: cluster 2026-03-10T11:33:31.562748+0000 mon.a (mon.0) 828 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:32 vm05 bash[22470]: audit 2026-03-10T11:33:31.773291+0000 mon.c (mon.1) 45 : audit [INF] from='client.? 192.168.123.105:0/222174725' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1703505188"}]: dispatch
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:32 vm05 bash[22470]: audit 2026-03-10T11:33:31.774063+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1703505188"}]: dispatch
2026-03-10T11:33:32.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:32 vm05 bash[22470]: cluster 2026-03-10T11:33:31.956380+0000 mgr.x (mgr.24733) 45 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s
2026-03-10T11:33:32.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[17804]: audit 2026-03-10T11:33:31.562671+0000 mon.a (mon.0) 827 : audit [INF] from='client.? 192.168.123.105:0/1317184513' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/313876564"}]': finished
2026-03-10T11:33:32.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[17804]: cluster 2026-03-10T11:33:31.562748+0000 mon.a (mon.0) 828 : cluster [DBG] osdmap e85: 8 total, 8 up, 8 in
2026-03-10T11:33:32.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[17804]: audit 2026-03-10T11:33:31.773291+0000 mon.c (mon.1) 45 : audit [INF] from='client.? 192.168.123.105:0/222174725' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1703505188"}]: dispatch
2026-03-10T11:33:32.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[17804]: audit 2026-03-10T11:33:31.774063+0000 mon.a (mon.0) 829 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1703505188"}]: dispatch
2026-03-10T11:33:32.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:32 vm07 bash[17804]: cluster 2026-03-10T11:33:31.956380+0000 mgr.x (mgr.24733) 45 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1 op/s
2026-03-10T11:33:33.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:33 vm05 bash[22470]: audit 2026-03-10T11:33:32.584099+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1703505188"}]': finished
2026-03-10T11:33:33.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:33 vm05 bash[22470]: cluster 2026-03-10T11:33:32.584190+0000 mon.a (mon.0) 831 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T11:33:33.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:33 vm05 bash[22470]: audit 2026-03-10T11:33:32.783598+0000 mon.b (mon.2) 155 : audit [INF] from='client.? 192.168.123.105:0/1581958892' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1590912030"}]: dispatch
2026-03-10T11:33:33.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:33 vm05 bash[22470]: audit 2026-03-10T11:33:32.783784+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1590912030"}]: dispatch
2026-03-10T11:33:33.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:33 vm05 bash[17453]: audit 2026-03-10T11:33:32.584099+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1703505188"}]': finished
2026-03-10T11:33:33.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:33 vm05 bash[17453]: cluster 2026-03-10T11:33:32.584190+0000 mon.a (mon.0) 831 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T11:33:33.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:33 vm05 bash[17453]: audit 2026-03-10T11:33:32.783598+0000 mon.b (mon.2) 155 : audit [INF] from='client.? 192.168.123.105:0/1581958892' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1590912030"}]: dispatch
2026-03-10T11:33:33.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:33 vm05 bash[17453]: audit 2026-03-10T11:33:32.783784+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1590912030"}]: dispatch
2026-03-10T11:33:33.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:33 vm07 bash[17804]: audit 2026-03-10T11:33:32.584099+0000 mon.a (mon.0) 830 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1703505188"}]': finished
2026-03-10T11:33:33.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:33 vm07 bash[17804]: cluster 2026-03-10T11:33:32.584190+0000 mon.a (mon.0) 831 : cluster [DBG] osdmap e86: 8 total, 8 up, 8 in
2026-03-10T11:33:33.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:33 vm07 bash[17804]: audit 2026-03-10T11:33:32.783598+0000 mon.b (mon.2) 155 : audit [INF] from='client.? 192.168.123.105:0/1581958892' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1590912030"}]: dispatch
2026-03-10T11:33:33.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:33 vm07 bash[17804]: audit 2026-03-10T11:33:32.783784+0000 mon.a (mon.0) 832 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1590912030"}]: dispatch
2026-03-10T11:33:34.344 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:33:33 vm05 bash[17722]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:33:33] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.33.4"
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:34 vm05 bash[22470]: audit 2026-03-10T11:33:33.770237+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1590912030"}]': finished
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:34 vm05 bash[22470]: cluster 2026-03-10T11:33:33.770326+0000 mon.a (mon.0) 834 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:34 vm05 bash[22470]: cluster 2026-03-10T11:33:33.956657+0000 mgr.x (mgr.24733) 46 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:34 vm05 bash[22470]: audit 2026-03-10T11:33:33.968000+0000 mon.c (mon.1) 46 : audit [INF] from='client.? 192.168.123.105:0/1593429845' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2551822194"}]: dispatch
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:34 vm05 bash[22470]: audit 2026-03-10T11:33:33.968400+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2551822194"}]: dispatch
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:34 vm05 bash[17453]: audit 2026-03-10T11:33:33.770237+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1590912030"}]': finished
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:34 vm05 bash[17453]: cluster 2026-03-10T11:33:33.770326+0000 mon.a (mon.0) 834 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:34 vm05 bash[17453]: cluster 2026-03-10T11:33:33.956657+0000 mgr.x (mgr.24733) 46 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:34 vm05 bash[17453]: audit 2026-03-10T11:33:33.968000+0000 mon.c (mon.1) 46 : audit [INF] from='client.? 192.168.123.105:0/1593429845' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2551822194"}]: dispatch
2026-03-10T11:33:35.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:34 vm05 bash[17453]: audit 2026-03-10T11:33:33.968400+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2551822194"}]: dispatch
2026-03-10T11:33:35.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:34 vm07 bash[17804]: audit 2026-03-10T11:33:33.770237+0000 mon.a (mon.0) 833 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/1590912030"}]': finished
2026-03-10T11:33:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:34 vm07 bash[17804]: cluster 2026-03-10T11:33:33.770326+0000 mon.a (mon.0) 834 : cluster [DBG] osdmap e87: 8 total, 8 up, 8 in
2026-03-10T11:33:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:34 vm07 bash[17804]: cluster 2026-03-10T11:33:33.956657+0000 mgr.x (mgr.24733) 46 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 71 MiB used, 160 GiB / 160 GiB avail; 1.5 KiB/s rd, 1 op/s
2026-03-10T11:33:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:34 vm07 bash[17804]: audit 2026-03-10T11:33:33.968000+0000 mon.c (mon.1) 46 : audit [INF] from='client.? 192.168.123.105:0/1593429845' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2551822194"}]: dispatch
2026-03-10T11:33:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:34 vm07 bash[17804]: audit 2026-03-10T11:33:33.968400+0000 mon.a (mon.0) 835 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2551822194"}]: dispatch
2026-03-10T11:33:36.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:35 vm07 bash[17804]: audit 2026-03-10T11:33:34.787463+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2551822194"}]': finished
2026-03-10T11:33:36.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:35 vm07 bash[17804]: cluster 2026-03-10T11:33:34.787623+0000 mon.a (mon.0) 837 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in
2026-03-10T11:33:36.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:35 vm07 bash[17804]: audit 2026-03-10T11:33:34.999882+0000 mon.b (mon.2) 156 : audit [INF] from='client.? 192.168.123.105:0/2443767126' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1590912030"}]: dispatch
2026-03-10T11:33:36.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:35 vm07 bash[17804]: audit 2026-03-10T11:33:35.000058+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1590912030"}]: dispatch
2026-03-10T11:33:36.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:35 vm05 bash[22470]: audit 2026-03-10T11:33:34.787463+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2551822194"}]': finished
2026-03-10T11:33:36.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:35 vm05 bash[22470]: cluster 2026-03-10T11:33:34.787623+0000 mon.a (mon.0) 837 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in
2026-03-10T11:33:36.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:35 vm05 bash[22470]: audit 2026-03-10T11:33:34.999882+0000 mon.b (mon.2) 156 : audit [INF] from='client.? 192.168.123.105:0/2443767126' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1590912030"}]: dispatch
2026-03-10T11:33:36.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:35 vm05 bash[22470]: audit 2026-03-10T11:33:35.000058+0000 mon.a (mon.0) 838 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1590912030"}]: dispatch
2026-03-10T11:33:36.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:35 vm05 bash[17453]: audit 2026-03-10T11:33:34.787463+0000 mon.a (mon.0) 836 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2551822194"}]': finished
2026-03-10T11:33:36.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:35 vm05 bash[17453]: cluster 2026-03-10T11:33:34.787623+0000 mon.a (mon.0) 837 : cluster [DBG] osdmap e88: 8 total, 8 up, 8 in
2026-03-10T11:33:36.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:35 vm05 bash[17453]: audit 2026-03-10T11:33:34.999882+0000 mon.b (mon.2) 156 : audit [INF] from='client.? 192.168.123.105:0/2443767126' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1590912030"}]: dispatch
2026-03-10T11:33:36.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:35 vm05 bash[17453]: audit 2026-03-10T11:33:35.000058+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1590912030"}]: dispatch 2026-03-10T11:33:36.551 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.551 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.551 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.552 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.552 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.552 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: Stopping Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d... 
2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.363Z caller=main.go:775 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.363Z caller=main.go:798 level=info msg="Stopping scrape discovery manager..." 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.363Z caller=main.go:812 level=info msg="Stopping notify discovery manager..." 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.363Z caller=main.go:834 level=info msg="Stopping scrape manager..." 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.363Z caller=main.go:794 level=info msg="Scrape discovery manager stopped" 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.363Z caller=main.go:808 level=info msg="Notify discovery manager stopped" 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.363Z caller=manager.go:945 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.363Z caller=manager.go:955 level=info component="rule manager" msg="Rule manager stopped" 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.364Z caller=main.go:828 level=info msg="Scrape manager stopped" 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.365Z caller=notifier.go:600 level=info component=notifier msg="Stopping notification manager..." 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.365Z caller=main.go:1054 level=info msg="Notifier manager stopped" 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[34037]: ts=2026-03-10T11:33:36.365Z caller=main.go:1066 level=info msg="See you next time!" 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38519]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-prometheus-a 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@prometheus.a.service: Deactivated successfully. 2026-03-10T11:33:36.552 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: Stopped Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:33:36.552 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:33:36.552 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.811 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.811 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.811 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Bus STOPPING 2026-03-10T11:33:36.812 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.812 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.812 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.812 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:33:36.812 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: Started Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.795Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.795Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.796Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm07 (none))" 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.796Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.796Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.797Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.797Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.798Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.798Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.800Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.800Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.563µs 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.800Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T11:33:36.812 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.812Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=2 2026-03-10T11:33:36.812 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:33:36 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:33:36.844 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[50896]: ts=2026-03-10T11:33:36.569Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004142183s 2026-03-10T11:33:37.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Bus STOPPED 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Bus STARTING 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Serving on http://:::9283 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Bus STARTED 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Bus STOPPING 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Bus STOPPED 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Bus STARTING 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Serving on http://:::9283 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Bus STARTED 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:36 vm07 bash[36672]: [10/Mar/2026:11:33:36] ENGINE Bus STOPPING 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 
vm07 bash[17804]: audit 2026-03-10T11:33:35.859517+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1590912030"}]': finished 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: cluster 2026-03-10T11:33:35.859538+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: cluster 2026-03-10T11:33:35.956943+0000 mgr.x (mgr.24733) 47 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.677602+0000 mon.a (mon.0) 841 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.687203+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.695375+0000 mon.b (mon.2) 157 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.696770+0000 mon.b (mon.2) 158 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.700442+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.715940+0000 mon.b (mon.2) 159 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.723435+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.726987+0000 mon.b (mon.2) 160 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.728760+0000 mon.b (mon.2) 161 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.732359+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.740684+0000 mon.b (mon.2) 162 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 
2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.742384+0000 mon.b (mon.2) 163 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.746347+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.755630+0000 mon.b (mon.2) 164 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.756743+0000 mon.b (mon.2) 165 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.764052+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:36 vm07 bash[17804]: audit 2026-03-10T11:33:36.810555+0000 mon.b (mon.2) 166 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:33:37.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.835Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=2 2026-03-10T11:33:37.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.836Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=2 2026-03-10T11:33:37.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.836Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=21.32µs wal_replay_duration=35.413561ms wbl_replay_duration=261ns total_replay_duration=35.450349ms 2026-03-10T11:33:37.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.837Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T11:33:37.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.837Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-10T11:33:37.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.837Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T11:33:37.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.860Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=22.235926ms db_storage=882ns remote_storage=1.211µs web_handler=561ns query_engine=991ns scrape=796.354µs scrape_sd=158.617µs notify=11.431µs notify_sd=7.504µs rules=20.661415ms tracing=6.363µs 2026-03-10T11:33:37.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 
11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.860Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-10T11:33:37.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:36 vm07 bash[38631]: ts=2026-03-10T11:33:36.860Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:35.859517+0000 mon.a (mon.0) 839 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1590912030"}]': finished 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: cluster 2026-03-10T11:33:35.859538+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: cluster 2026-03-10T11:33:35.956943+0000 mgr.x (mgr.24733) 47 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.677602+0000 mon.a (mon.0) 841 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.687203+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.695375+0000 mon.b (mon.2) 157 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.696770+0000 mon.b (mon.2) 158 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.700442+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.715940+0000 mon.b (mon.2) 159 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.723435+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.726987+0000 mon.b (mon.2) 160 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.728760+0000 mon.b (mon.2) 161 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:33:37.344 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.732359+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.740684+0000 mon.b (mon.2) 162 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.742384+0000 mon.b (mon.2) 163 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.746347+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.755630+0000 mon.b (mon.2) 164 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.756743+0000 mon.b (mon.2) 165 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.764052+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:36 vm05 bash[17453]: audit 2026-03-10T11:33:36.810555+0000 mon.b (mon.2) 166 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:35.859517+0000 mon.a (mon.0) 839 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/1590912030"}]': finished 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: cluster 2026-03-10T11:33:35.859538+0000 mon.a (mon.0) 840 : cluster [DBG] osdmap e89: 8 total, 8 up, 8 in 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: cluster 2026-03-10T11:33:35.956943+0000 mgr.x (mgr.24733) 47 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.677602+0000 mon.a (mon.0) 841 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.687203+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.695375+0000 mon.b (mon.2) 157 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.696770+0000 mon.b (mon.2) 158 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.700442+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.715940+0000 mon.b (mon.2) 159 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.723435+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.726987+0000 mon.b (mon.2) 160 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.728760+0000 mon.b (mon.2) 161 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.732359+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.740684+0000 mon.b (mon.2) 162 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 
2026-03-10T11:33:36.742384+0000 mon.b (mon.2) 163 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.746347+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.755630+0000 mon.b (mon.2) 164 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:33:37.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.756743+0000 mon.b (mon.2) 165 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T11:33:37.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.764052+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:37.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:36 vm05 bash[22470]: audit 2026-03-10T11:33:36.810555+0000 mon.b (mon.2) 166 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:33:37.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:37 vm07 bash[36672]: [10/Mar/2026:11:33:37] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T11:33:37.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:37 vm07 bash[36672]: [10/Mar/2026:11:33:37] ENGINE Bus STOPPED 2026-03-10T11:33:37.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:37 vm07 bash[36672]: [10/Mar/2026:11:33:37] ENGINE Bus STARTING 2026-03-10T11:33:37.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:37 vm07 bash[36672]: [10/Mar/2026:11:33:37] ENGINE Serving on http://:::9283 2026-03-10T11:33:37.946 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:37 vm07 bash[36672]: [10/Mar/2026:11:33:37] ENGINE Bus STARTED 2026-03-10T11:33:38.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:36.695689+0000 mgr.x (mgr.24733) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:36.696962+0000 mgr.x (mgr.24733) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:36.716660+0000 mgr.x (mgr.24733) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: cephadm 2026-03-10T11:33:36.726732+0000 mgr.x (mgr.24733) 51 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:36.727454+0000 mgr.x (mgr.24733) 52 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:36.729240+0000 mgr.x (mgr.24733) 53 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:36.741007+0000 mgr.x (mgr.24733) 54 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:36.742643+0000 mgr.x (mgr.24733) 55 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:36.756006+0000 mgr.x (mgr.24733) 56 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:36.757105+0000 mgr.x (mgr.24733) 57 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:37.125253+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:38 vm07 bash[17804]: audit 2026-03-10T11:33:37.131405+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:36.695689+0000 mgr.x (mgr.24733) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:36.696962+0000 mgr.x (mgr.24733) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:36.716660+0000 mgr.x (mgr.24733) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: cephadm 2026-03-10T11:33:36.726732+0000 mgr.x (mgr.24733) 51 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:36.727454+0000 mgr.x (mgr.24733) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:36.729240+0000 mgr.x (mgr.24733) 53 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:36.741007+0000 mgr.x (mgr.24733) 54 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:36.742643+0000 mgr.x (mgr.24733) 55 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:36.756006+0000 mgr.x (mgr.24733) 56 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:36.757105+0000 mgr.x (mgr.24733) 57 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:37.125253+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:38 vm05 bash[22470]: audit 2026-03-10T11:33:37.131405+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:36.695689+0000 mgr.x (mgr.24733) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:36.696962+0000 mgr.x (mgr.24733) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm05.local:9093"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:36.716660+0000 mgr.x (mgr.24733) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: cephadm 2026-03-10T11:33:36.726732+0000 mgr.x (mgr.24733) 51 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:36.727454+0000 mgr.x (mgr.24733) 52 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:36.729240+0000 mgr.x (mgr.24733) 53 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:36.741007+0000 mgr.x (mgr.24733) 54 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:36.742643+0000 mgr.x (mgr.24733) 55 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm07.local:3000"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:36.756006+0000 mgr.x (mgr.24733) 56 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:36.757105+0000 mgr.x (mgr.24733) 57 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm07.local:9095"}]: dispatch 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:37.125253+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:38.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:38 vm05 bash[17453]: audit 2026-03-10T11:33:37.131405+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:39.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:39 vm07 bash[17804]: cluster 2026-03-10T11:33:37.957259+0000 mgr.x (mgr.24733) 58 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 950 B/s rd, 0 op/s 2026-03-10T11:33:39.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:39 vm07 bash[17804]: audit 2026-03-10T11:33:38.203726+0000 mgr.x (mgr.24733) 59 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:33:39.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:39 vm05 bash[22470]: cluster 2026-03-10T11:33:37.957259+0000 mgr.x (mgr.24733) 58 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 950 B/s rd, 0 op/s 2026-03-10T11:33:39.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:39 vm05 bash[22470]: audit 2026-03-10T11:33:38.203726+0000 mgr.x (mgr.24733) 59 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:33:39.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:39 vm05 bash[17453]: cluster 2026-03-10T11:33:37.957259+0000 mgr.x (mgr.24733) 58 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 950 B/s rd, 0 op/s 2026-03-10T11:33:39.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:39 vm05 bash[17453]: audit 2026-03-10T11:33:38.203726+0000 mgr.x (mgr.24733) 59 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:33:41.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:41 vm07 bash[17804]: cluster 2026-03-10T11:33:39.957668+0000 mgr.x (mgr.24733) 60 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 827 B/s rd, 0 op/s 2026-03-10T11:33:41.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:41 vm05 bash[22470]: cluster 2026-03-10T11:33:39.957668+0000 mgr.x (mgr.24733) 60 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 
73 MiB used, 160 GiB / 160 GiB avail; 827 B/s rd, 0 op/s 2026-03-10T11:33:41.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:41 vm05 bash[17453]: cluster 2026-03-10T11:33:39.957668+0000 mgr.x (mgr.24733) 60 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 827 B/s rd, 0 op/s 2026-03-10T11:33:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:43 vm07 bash[17804]: cluster 2026-03-10T11:33:41.958201+0000 mgr.x (mgr.24733) 61 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:33:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:43 vm07 bash[17804]: audit 2026-03-10T11:33:42.128098+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:43 vm07 bash[17804]: audit 2026-03-10T11:33:42.136147+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:43 vm07 bash[17804]: audit 2026-03-10T11:33:42.450012+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:43 vm07 bash[17804]: audit 2026-03-10T11:33:42.456397+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:43 vm07 bash[17804]: audit 2026-03-10T11:33:42.457598+0000 mon.b (mon.2) 167 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:33:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:43 vm07 bash[17804]: audit 2026-03-10T11:33:42.458082+0000 mon.b (mon.2) 168 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:33:43.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:43 vm07 bash[17804]: audit 2026-03-10T11:33:42.461984+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:43 vm05 bash[22470]: cluster 2026-03-10T11:33:41.958201+0000 mgr.x (mgr.24733) 61 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:43 vm05 bash[22470]: audit 2026-03-10T11:33:42.128098+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:43 vm05 bash[22470]: audit 2026-03-10T11:33:42.136147+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:43 vm05 bash[22470]: audit 2026-03-10T11:33:42.450012+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:43 vm05 bash[22470]: audit 2026-03-10T11:33:42.456397+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:43 vm05 bash[22470]: audit 2026-03-10T11:33:42.457598+0000 mon.b (mon.2) 167 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config 
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:43 vm05 bash[22470]: audit 2026-03-10T11:33:42.458082+0000 mon.b (mon.2) 168 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:43 vm05 bash[22470]: audit 2026-03-10T11:33:42.461984+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:43 vm05 bash[17453]: cluster 2026-03-10T11:33:41.958201+0000 mgr.x (mgr.24733) 61 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:43 vm05 bash[17453]: audit 2026-03-10T11:33:42.128098+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:43 vm05 bash[17453]: audit 2026-03-10T11:33:42.136147+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:43 vm05 bash[17453]: audit 2026-03-10T11:33:42.450012+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:43 vm05 bash[17453]: audit 2026-03-10T11:33:42.456397+0000 mon.a (mon.0) 853 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:43 vm05 bash[17453]: audit 2026-03-10T11:33:42.457598+0000 mon.b (mon.2) 167 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:43 vm05 bash[17453]: audit 2026-03-10T11:33:42.458082+0000 mon.b (mon.2) 168 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:33:43.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:43 vm05 bash[17453]: audit 2026-03-10T11:33:42.461984+0000 mon.a (mon.0) 854 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:33:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:44 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:33:44] "GET /metrics HTTP/1.1" 200 37526 "" "Prometheus/2.51.0"
2026-03-10T11:33:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:45 vm07 bash[17804]: cluster 2026-03-10T11:33:43.958471+0000 mgr.x (mgr.24733) 62 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:33:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:45 vm07 bash[17804]: audit 2026-03-10T11:33:44.152028+0000 mon.b (mon.2) 169 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:33:45.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:45 vm05 bash[22470]: cluster 2026-03-10T11:33:43.958471+0000 mgr.x (mgr.24733) 62 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:33:45.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:45 vm05 bash[22470]: audit 2026-03-10T11:33:44.152028+0000 mon.b (mon.2) 169 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:33:45.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:45 vm05 bash[17453]: cluster 2026-03-10T11:33:43.958471+0000 mgr.x (mgr.24733) 62 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:33:45.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:45 vm05 bash[17453]: audit 2026-03-10T11:33:44.152028+0000 mon.b (mon.2) 169 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:33:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:47 vm07 bash[17804]: cluster 2026-03-10T11:33:45.958941+0000 mgr.x (mgr.24733) 63 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1013 B/s rd, 0 op/s
2026-03-10T11:33:47.446 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:46 vm07 bash[38631]: ts=2026-03-10T11:33:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:33:47.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:47 vm05 bash[22470]: cluster 2026-03-10T11:33:45.958941+0000 mgr.x (mgr.24733) 63 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1013 B/s rd, 0 op/s
2026-03-10T11:33:47.594 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:47 vm05 bash[17453]: cluster 2026-03-10T11:33:45.958941+0000 mgr.x (mgr.24733) 63 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1013 B/s rd, 0 op/s
2026-03-10T11:33:49.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:48 vm05 bash[22470]: cluster 2026-03-10T11:33:47.959217+0000 mgr.x (mgr.24733) 64 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:49.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:48 vm05 bash[22470]: audit 2026-03-10T11:33:48.211723+0000 mgr.x (mgr.24733) 65 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
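The CephNodeDiskspaceWarning evaluation failure above recurs for the remainder of this section: Prometheus holds two node_uname_info series for instance="vm07", one carrying a cluster="72041074-..." label and one without it, so the rule's "on (instance) group_left (nodename)" join has two right-hand matches and is rejected as many-to-many. A quick way to confirm the duplicate is to query the series endpoint of the Prometheus instance whose API host the dashboard module registered above (http://vm07.local:9095); this is a hedged sketch, assuming that host is reachable from where you run it and that python3 is available for pretty-printing:

  # List all node_uname_info series for vm07; two entries differing only in
  # the "cluster" label confirm the duplicate behind the many-to-many error.
  curl -sG 'http://vm07.local:9095/api/v1/series' \
    --data-urlencode 'match[]=node_uname_info{instance="vm07"}' \
    | python3 -m json.tool

One illustrative rewrite (not the fix shipped in ceph_alerts.yml) is to collapse the right-hand side to a single series per instance, joining against max by (instance, nodename) (node_uname_info) instead of bare node_uname_info.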
2026-03-10T11:33:49.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:48 vm05 bash[17453]: cluster 2026-03-10T11:33:47.959217+0000 mgr.x (mgr.24733) 64 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:49.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:48 vm05 bash[17453]: audit 2026-03-10T11:33:48.211723+0000 mgr.x (mgr.24733) 65 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:48 vm07 bash[17804]: cluster 2026-03-10T11:33:47.959217+0000 mgr.x (mgr.24733) 64 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:48 vm07 bash[17804]: audit 2026-03-10T11:33:48.211723+0000 mgr.x (mgr.24733) 65 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:51.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:51 vm05 bash[22470]: cluster 2026-03-10T11:33:49.959524+0000 mgr.x (mgr.24733) 66 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:51.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:51 vm05 bash[17453]: cluster 2026-03-10T11:33:49.959524+0000 mgr.x (mgr.24733) 66 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:51 vm07 bash[17804]: cluster 2026-03-10T11:33:49.959524+0000 mgr.x (mgr.24733) 66 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:53.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:53 vm05 bash[22470]: cluster 2026-03-10T11:33:51.960104+0000 mgr.x (mgr.24733) 67 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:53.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:53 vm05 bash[17453]: cluster 2026-03-10T11:33:51.960104+0000 mgr.x (mgr.24733) 67 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:53 vm07 bash[17804]: cluster 2026-03-10T11:33:51.960104+0000 mgr.x (mgr.24733) 67 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:54.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:54 vm07 bash[38631]: ts=2026-03-10T11:33:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
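The CephOSDFlapping failure above is the same many-to-many pattern on the Ceph exporter side: ceph_osd_metadata for osd.0 exists twice, once with instance="ceph_cluster" plus a cluster label and once with instance="192.168.123.107:9283" and no cluster label, so the rule's "on (ceph_daemon) group_left (hostname)" join is rejected. The analogous hedged check, under the same reachability assumption as the sketch above:

  # Expect two ceph_osd_metadata series for osd.0, differing in the
  # "instance" and "cluster" labels.
  curl -sG 'http://vm07.local:9095/api/v1/series' \
    --data-urlencode 'match[]=ceph_osd_metadata{ceph_daemon="osd.0"}' \
    | python3 -m json.tool

The matching illustrative rewrite would join against max by (ceph_daemon, hostname) (ceph_osd_metadata). Both warnings repeat at each ten-second evaluation interval below; neither affects the active+clean pgmap health lines interleaved with them.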
2026-03-10T11:33:54.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:33:54 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:33:54] "GET /metrics HTTP/1.1" 200 37526 "" "Prometheus/2.51.0"
2026-03-10T11:33:55.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:55 vm05 bash[17453]: cluster 2026-03-10T11:33:53.960450+0000 mgr.x (mgr.24733) 68 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:55.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:55 vm05 bash[22470]: cluster 2026-03-10T11:33:53.960450+0000 mgr.x (mgr.24733) 68 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:55.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:55 vm07 bash[17804]: cluster 2026-03-10T11:33:53.960450+0000 mgr.x (mgr.24733) 68 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:57.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:57 vm05 bash[17453]: cluster 2026-03-10T11:33:55.960985+0000 mgr.x (mgr.24733) 69 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:57.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:57 vm05 bash[22470]: cluster 2026-03-10T11:33:55.960985+0000 mgr.x (mgr.24733) 69 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:57 vm07 bash[17804]: cluster 2026-03-10T11:33:55.960985+0000 mgr.x (mgr.24733) 69 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:33:57.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:33:56 vm07 bash[38631]: ts=2026-03-10T11:33:56.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:33:59.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:58 vm05 bash[22470]: cluster 2026-03-10T11:33:57.961281+0000 mgr.x (mgr.24733) 70 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:59.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:58 vm05 bash[22470]: audit 2026-03-10T11:33:58.218978+0000 mgr.x (mgr.24733) 71 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:59.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:58 vm05 bash[17453]: cluster 2026-03-10T11:33:57.961281+0000 mgr.x (mgr.24733) 70 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:59.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:58 vm05 bash[17453]: audit 2026-03-10T11:33:58.218978+0000 mgr.x (mgr.24733) 71 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:33:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:58 vm07 bash[17804]: cluster 2026-03-10T11:33:57.961281+0000 mgr.x (mgr.24733) 70 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:33:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:58 vm07 bash[17804]: audit 2026-03-10T11:33:58.218978+0000 mgr.x (mgr.24733) 71 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:00.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:33:59 vm05 bash[22470]: audit 2026-03-10T11:33:59.152384+0000 mon.b (mon.2) 170 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:34:00.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:33:59 vm05 bash[17453]: audit 2026-03-10T11:33:59.152384+0000 mon.b (mon.2) 170 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:34:00.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:33:59 vm07 bash[17804]: audit 2026-03-10T11:33:59.152384+0000 mon.b (mon.2) 170 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:34:01.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:00 vm05 bash[22470]: cluster 2026-03-10T11:33:59.961613+0000 mgr.x (mgr.24733) 72 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:00 vm05 bash[17453]: cluster 2026-03-10T11:33:59.961613+0000 mgr.x (mgr.24733) 72 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:01.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:00 vm07 bash[17804]: cluster 2026-03-10T11:33:59.961613+0000 mgr.x (mgr.24733) 72 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:03.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:03 vm05 bash[22470]: cluster 2026-03-10T11:34:01.962107+0000 mgr.x (mgr.24733) 73 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:03.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:03 vm05 bash[17453]: cluster 2026-03-10T11:34:01.962107+0000 mgr.x (mgr.24733) 73 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:03 vm07 bash[17804]: cluster 2026-03-10T11:34:01.962107+0000 mgr.x (mgr.24733) 73 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:04.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:04 vm07 bash[38631]: ts=2026-03-10T11:34:04.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:34:04.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:34:04 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:34:04] "GET /metrics HTTP/1.1" 200 37522 "" "Prometheus/2.51.0"
2026-03-10T11:34:05.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:05 vm05 bash[17453]: cluster 2026-03-10T11:34:03.962374+0000 mgr.x (mgr.24733) 74 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:05.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:05 vm05 bash[22470]: cluster 2026-03-10T11:34:03.962374+0000 mgr.x (mgr.24733) 74 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:05.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:05 vm07 bash[17804]: cluster 2026-03-10T11:34:03.962374+0000 mgr.x (mgr.24733) 74 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:07.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:07 vm05 bash[22470]: cluster 2026-03-10T11:34:05.963006+0000 mgr.x (mgr.24733) 75 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:07.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:07 vm05 bash[17453]: cluster 2026-03-10T11:34:05.963006+0000 mgr.x (mgr.24733) 75 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:07.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:07 vm07 bash[17804]: cluster 2026-03-10T11:34:05.963006+0000 mgr.x (mgr.24733) 75 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:07.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:06 vm07 bash[38631]: ts=2026-03-10T11:34:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:34:09.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:08 vm05 bash[22470]: cluster 2026-03-10T11:34:07.963337+0000 mgr.x (mgr.24733) 76 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:09.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:08 vm05 bash[22470]: audit 2026-03-10T11:34:08.229494+0000 mgr.x (mgr.24733) 77 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:09.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:08 vm05 bash[17453]: cluster 2026-03-10T11:34:07.963337+0000 mgr.x (mgr.24733) 76 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:09.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:08 vm05 bash[17453]: audit 2026-03-10T11:34:08.229494+0000 mgr.x (mgr.24733) 77 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:08 vm07 bash[17804]: cluster 2026-03-10T11:34:07.963337+0000 mgr.x (mgr.24733) 76 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:08 vm07 bash[17804]: audit 2026-03-10T11:34:08.229494+0000 mgr.x (mgr.24733) 77 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:11.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:11 vm05 bash[22470]: cluster 2026-03-10T11:34:09.963645+0000 mgr.x (mgr.24733) 78 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:11.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:11 vm05 bash[17453]: cluster 2026-03-10T11:34:09.963645+0000 mgr.x (mgr.24733) 78 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:11 vm07 bash[17804]: cluster 2026-03-10T11:34:09.963645+0000 mgr.x (mgr.24733) 78 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:13.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:13 vm05 bash[22470]: cluster 2026-03-10T11:34:11.964138+0000 mgr.x (mgr.24733) 79 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:13.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:13 vm05 bash[17453]: cluster 2026-03-10T11:34:11.964138+0000 mgr.x (mgr.24733) 79 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:13 vm07 bash[17804]: cluster 2026-03-10T11:34:11.964138+0000 mgr.x (mgr.24733) 79 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:14.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:14 vm07 bash[38631]: ts=2026-03-10T11:34:14.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:34:14.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:34:14 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:34:14] "GET /metrics HTTP/1.1" 200 37522 "" "Prometheus/2.51.0"
2026-03-10T11:34:15.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:15 vm05 bash[22470]: cluster 2026-03-10T11:34:13.964487+0000 mgr.x (mgr.24733) 80 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:15.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:15 vm05 bash[22470]: audit 2026-03-10T11:34:14.152480+0000 mon.b (mon.2) 171 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:34:15.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:15 vm05 bash[17453]: cluster 2026-03-10T11:34:13.964487+0000 mgr.x (mgr.24733) 80 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:15.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:15 vm05 bash[17453]: audit 2026-03-10T11:34:14.152480+0000 mon.b (mon.2) 171 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:34:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:15 vm07 bash[17804]: cluster 2026-03-10T11:34:13.964487+0000 mgr.x (mgr.24733) 80 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:15 vm07 bash[17804]: audit 2026-03-10T11:34:14.152480+0000 mon.b (mon.2) 171 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:34:17.262 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:16 vm07 bash[38631]: ts=2026-03-10T11:34:16.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:34:17.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:17 vm05 bash[22470]: cluster 2026-03-10T11:34:15.965368+0000 mgr.x (mgr.24733) 81 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:17.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:17 vm05 bash[17453]: cluster 2026-03-10T11:34:15.965368+0000 mgr.x (mgr.24733) 81 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:17.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:17 vm07 bash[17804]: cluster 2026-03-10T11:34:15.965368+0000 mgr.x (mgr.24733) 81 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:19.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:18 vm05 bash[22470]: cluster 2026-03-10T11:34:17.965680+0000 mgr.x (mgr.24733) 82 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:19.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:18 vm05 bash[22470]: audit 2026-03-10T11:34:18.239163+0000 mgr.x (mgr.24733) 83 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:19.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:18 vm05 bash[17453]: cluster 2026-03-10T11:34:17.965680+0000 mgr.x (mgr.24733) 82 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:19.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:18 vm05 bash[17453]: audit 2026-03-10T11:34:18.239163+0000 mgr.x (mgr.24733) 83 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:18 vm07 bash[17804]: cluster 2026-03-10T11:34:17.965680+0000 mgr.x (mgr.24733) 82 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:18 vm07 bash[17804]: audit 2026-03-10T11:34:18.239163+0000 mgr.x (mgr.24733) 83 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:21.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:21 vm05 bash[22470]: cluster 2026-03-10T11:34:19.966001+0000 mgr.x (mgr.24733) 84 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:21.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:21 vm05 bash[17453]: cluster 2026-03-10T11:34:19.966001+0000 mgr.x (mgr.24733) 84 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:21.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:21 vm07 bash[17804]: cluster 2026-03-10T11:34:19.966001+0000 mgr.x (mgr.24733) 84 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:23.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:23 vm05 bash[22470]: cluster 2026-03-10T11:34:21.966461+0000 mgr.x (mgr.24733) 85 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:23.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:23 vm05 bash[17453]: cluster 2026-03-10T11:34:21.966461+0000 mgr.x (mgr.24733) 85 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:23 vm07 bash[17804]: cluster 2026-03-10T11:34:21.966461+0000 mgr.x (mgr.24733) 85 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:24.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:24 vm07 bash[38631]: ts=2026-03-10T11:34:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:34:24.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:34:24 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:34:24] "GET /metrics HTTP/1.1" 200 37522 "" "Prometheus/2.51.0"
2026-03-10T11:34:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:25 vm07 bash[17804]: cluster 2026-03-10T11:34:23.966738+0000 mgr.x (mgr.24733) 86 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:25.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:25 vm05 bash[22470]: cluster 2026-03-10T11:34:23.966738+0000 mgr.x (mgr.24733) 86 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:25.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:25 vm05 bash[17453]: cluster 2026-03-10T11:34:23.966738+0000 mgr.x (mgr.24733) 86 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:27.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:27 vm07 bash[17804]: cluster 2026-03-10T11:34:25.967239+0000 mgr.x (mgr.24733) 87 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:27.446 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:26 vm07 bash[38631]: ts=2026-03-10T11:34:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:34:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:27 vm05 bash[22470]: cluster 2026-03-10T11:34:25.967239+0000 mgr.x (mgr.24733) 87 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:27 vm05 bash[17453]: cluster 2026-03-10T11:34:25.967239+0000 mgr.x (mgr.24733) 87 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:29.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:28 vm05 bash[22470]: cluster 2026-03-10T11:34:27.967552+0000 mgr.x (mgr.24733) 88 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:29.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:28 vm05 bash[22470]: audit 2026-03-10T11:34:28.248332+0000 mgr.x (mgr.24733) 89 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:29.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:28 vm05 bash[17453]: cluster 2026-03-10T11:34:27.967552+0000 mgr.x (mgr.24733) 88 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:29.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:28 vm05 bash[17453]: audit 2026-03-10T11:34:28.248332+0000 mgr.x (mgr.24733) 89 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:28 vm07 bash[17804]: cluster 2026-03-10T11:34:27.967552+0000 mgr.x (mgr.24733) 88 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:28 vm07 bash[17804]: audit 2026-03-10T11:34:28.248332+0000 mgr.x (mgr.24733) 89 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
"format": "json"}]: dispatch 2026-03-10T11:34:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:29 vm05 bash[22470]: audit 2026-03-10T11:34:29.152569+0000 mon.b (mon.2) 172 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:34:30.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:29 vm05 bash[17453]: audit 2026-03-10T11:34:29.152569+0000 mon.b (mon.2) 172 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:34:30.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:29 vm07 bash[17804]: audit 2026-03-10T11:34:29.152569+0000 mon.b (mon.2) 172 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:34:31.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:30 vm05 bash[22470]: cluster 2026-03-10T11:34:29.967970+0000 mgr.x (mgr.24733) 90 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:31.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:30 vm05 bash[17453]: cluster 2026-03-10T11:34:29.967970+0000 mgr.x (mgr.24733) 90 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:31.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:30 vm07 bash[17804]: cluster 2026-03-10T11:34:29.967970+0000 mgr.x (mgr.24733) 90 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:33.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:33 vm05 bash[22470]: cluster 2026-03-10T11:34:31.968558+0000 mgr.x (mgr.24733) 91 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:34:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:33 vm05 bash[17453]: cluster 2026-03-10T11:34:31.968558+0000 mgr.x (mgr.24733) 91 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:34:33.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:33 vm07 bash[17804]: cluster 2026-03-10T11:34:31.968558+0000 mgr.x (mgr.24733) 91 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:34:34.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:34 vm07 bash[38631]: ts=2026-03-10T11:34:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
2026-03-10T11:34:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:34:34 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:34:34] "GET /metrics HTTP/1.1" 200 37522 "" "Prometheus/2.51.0"
2026-03-10T11:34:35.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:35 vm05 bash[22470]: cluster 2026-03-10T11:34:33.968885+0000 mgr.x (mgr.24733) 92 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:35.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:35 vm05 bash[17453]: cluster 2026-03-10T11:34:33.968885+0000 mgr.x (mgr.24733) 92 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:35.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:35 vm07 bash[17804]: cluster 2026-03-10T11:34:33.968885+0000 mgr.x (mgr.24733) 92 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:37.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:37 vm05 bash[22470]: cluster 2026-03-10T11:34:35.969514+0000 mgr.x (mgr.24733) 93 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:37 vm05 bash[17453]: cluster 2026-03-10T11:34:35.969514+0000 mgr.x (mgr.24733) 93 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:37 vm07 bash[17804]: cluster 2026-03-10T11:34:35.969514+0000 mgr.x (mgr.24733) 93 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:37.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:36 vm07 bash[38631]: ts=2026-03-10T11:34:36.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:34:39.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:38 vm05 bash[22470]: cluster 2026-03-10T11:34:37.969915+0000 mgr.x (mgr.24733) 94 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:39.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:38 vm05 bash[22470]: audit 2026-03-10T11:34:38.249540+0000 mgr.x (mgr.24733) 95 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:38 vm05 bash[17453]: cluster 2026-03-10T11:34:37.969915+0000 mgr.x (mgr.24733) 94 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:38 vm05 bash[17453]: audit 2026-03-10T11:34:38.249540+0000 mgr.x (mgr.24733) 95 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:38 vm07 bash[17804]: cluster 2026-03-10T11:34:37.969915+0000 mgr.x (mgr.24733) 94 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:38 vm07 bash[17804]: audit 2026-03-10T11:34:38.249540+0000 mgr.x (mgr.24733) 95 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:41.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:41 vm05 bash[22470]: cluster 2026-03-10T11:34:39.970300+0000 mgr.x (mgr.24733) 96 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:41.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:41 vm05 bash[17453]: cluster 2026-03-10T11:34:39.970300+0000 mgr.x (mgr.24733) 96 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:41.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:41 vm07 bash[17804]: cluster 2026-03-10T11:34:39.970300+0000 mgr.x (mgr.24733) 96 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:43.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:43 vm05 bash[22470]: cluster 2026-03-10T11:34:41.970801+0000 mgr.x (mgr.24733) 97 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:43.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:43 vm05 bash[22470]: audit 2026-03-10T11:34:42.501800+0000 mon.b (mon.2) 173 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:34:43.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:43 vm05 bash[22470]: audit 2026-03-10T11:34:42.768513+0000 mon.a (mon.0) 855 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:43.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:43 vm05 bash[22470]: audit 2026-03-10T11:34:42.774740+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:43.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:43 vm05 bash[17453]: cluster 2026-03-10T11:34:41.970801+0000 mgr.x (mgr.24733) 97 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:43.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:43 vm05 bash[17453]: audit 2026-03-10T11:34:42.501800+0000 mon.b (mon.2) 173 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:34:43.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:43 vm05 bash[17453]: audit 2026-03-10T11:34:42.768513+0000 mon.a (mon.0) 855 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:43.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:43 vm05 bash[17453]: audit 2026-03-10T11:34:42.774740+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:43 vm07 bash[17804]: cluster 2026-03-10T11:34:41.970801+0000 mgr.x (mgr.24733) 97 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:43 vm07 bash[17804]: audit 2026-03-10T11:34:42.501800+0000 mon.b (mon.2) 173 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:34:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:43 vm07 bash[17804]: audit 2026-03-10T11:34:42.768513+0000 mon.a (mon.0) 855 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:43 vm07 bash[17804]: audit 2026-03-10T11:34:42.774740+0000 mon.a (mon.0) 856 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:44.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:44 vm07 bash[38631]: ts=2026-03-10T11:34:44.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:34:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:34:44 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:34:44] "GET /metrics HTTP/1.1" 200 37515 "" "Prometheus/2.51.0" 2026-03-10T11:34:45.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:45 vm05 bash[17453]: cluster 2026-03-10T11:34:43.971097+0000 mgr.x (mgr.24733) 98 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:45.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:45 vm05 bash[17453]: audit 2026-03-10T11:34:44.152730+0000 mon.b (mon.2) 174 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:34:45.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:45 vm05 bash[22470]: cluster 2026-03-10T11:34:43.971097+0000 mgr.x (mgr.24733) 98 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:45.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:45 vm05 bash[22470]: audit 2026-03-10T11:34:44.152730+0000 mon.b (mon.2) 174 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:34:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:45 vm07 bash[17804]: cluster 2026-03-10T11:34:43.971097+0000 mgr.x (mgr.24733) 98 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:45 vm07 bash[17804]: audit 2026-03-10T11:34:44.152730+0000 mon.b (mon.2) 174 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:34:47.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:47 vm05 bash[22470]: cluster 2026-03-10T11:34:45.971699+0000 mgr.x (mgr.24733) 99 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 
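The CephOSDFlapping evaluation failure above repeats on every rule-evaluation cycle for the rest of this run. Its cause is visible in the err= text: ceph_osd_metadata is present twice per OSD (one series with instance="ceph_cluster" and a cluster label, one with instance="192.168.123.107:9283" and no cluster label), which is what happens when the same mgr exporter ends up covered by two scrape configurations, so the on (ceph_daemon) group_left (hostname) join is many-to-many and Prometheus rejects it. A minimal sketch of how to confirm and work around the duplication from a shell; the Prometheus URL is an assumption (cephadm deploys prometheus.a on port 9095 by default), not something recorded in this log:

    # Assumed endpoint of the deployed prometheus.a daemon; adjust as needed.
    PROM=http://vm07:9095
    # Any result > 1 confirms duplicated metadata series per OSD.
    curl -s "$PROM/api/v1/query" \
      --data-urlencode 'query=count by (ceph_daemon) (ceph_osd_metadata) > 1'
    # The same alert expression, with the right-hand side collapsed to one
    # series per (ceph_daemon, hostname) so each match group is unique again;
    # this assumes every OSD reports a single hostname, as it does here.
    curl -s "$PROM/api/v1/query" \
      --data-urlencode 'query=(rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) max by (ceph_daemon, hostname) (ceph_osd_metadata)) * 60 > 1'

The max by () aggregation only makes the rule tolerant of the duplicates; the underlying fix would be on the scrape side, leaving a single scrape configuration pointed at the mgr exporter.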
2026-03-10T11:34:47.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:47 vm05 bash[22470]: cluster 2026-03-10T11:34:45.971699+0000 mgr.x (mgr.24733) 99 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:47.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:47 vm05 bash[17453]: cluster 2026-03-10T11:34:45.971699+0000 mgr.x (mgr.24733) 99 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:47 vm07 bash[17804]: cluster 2026-03-10T11:34:45.971699+0000 mgr.x (mgr.24733) 99 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:34:47.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:46 vm07 bash[38631]: ts=2026-03-10T11:34:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:48 vm05 bash[22470]: audit 2026-03-10T11:34:47.727475+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:48 vm05 bash[22470]: audit 2026-03-10T11:34:47.734366+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:48 vm05 bash[22470]: cluster 2026-03-10T11:34:47.972027+0000 mgr.x (mgr.24733) 100 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:48 vm05 bash[22470]: audit 2026-03-10T11:34:48.015678+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:48 vm05 bash[22470]: audit 2026-03-10T11:34:48.021722+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:48 vm05 bash[22470]: audit 2026-03-10T11:34:48.253621+0000 mgr.x (mgr.24733) 101 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
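The CephNodeDiskspaceWarning failure above has the same shape: node_uname_info exists twice for instance="vm07", once with a cluster label and once without, so the on (instance) match is many-to-many. A sketch of the same workaround against the same assumed endpoint, collapsing the right-hand side to one series per instance (this relies on nodename being identical across the duplicates, which the error text shows it is):

    PROM=http://vm07:9095   # assumed, as above
    # Diskspace prediction join with node_uname_info deduplicated per instance.
    curl -s "$PROM/api/v1/query" \
      --data-urlencode 'query=predict_linear(node_filesystem_free_bytes{device=~"/.*"}[2d], 3600 * 24 * 5) * on (instance) group_left (nodename) max by (instance, nodename) (node_uname_info) < 0'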
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:48 vm05 bash[22470]: audit 2026-03-10T11:34:48.307680+0000 mon.b (mon.2) 175 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:48 vm05 bash[22470]: audit 2026-03-10T11:34:48.308291+0000 mon.b (mon.2) 176 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:48 vm05 bash[22470]: audit 2026-03-10T11:34:48.313464+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:48 vm05 bash[17453]: audit 2026-03-10T11:34:47.727475+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:48 vm05 bash[17453]: audit 2026-03-10T11:34:47.734366+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:48 vm05 bash[17453]: cluster 2026-03-10T11:34:47.972027+0000 mgr.x (mgr.24733) 100 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:48 vm05 bash[17453]: audit 2026-03-10T11:34:48.015678+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:48 vm05 bash[17453]: audit 2026-03-10T11:34:48.021722+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:48 vm05 bash[17453]: audit 2026-03-10T11:34:48.253621+0000 mgr.x (mgr.24733) 101 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:48 vm05 bash[17453]: audit 2026-03-10T11:34:48.307680+0000 mon.b (mon.2) 175 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:48 vm05 bash[17453]: audit 2026-03-10T11:34:48.308291+0000 mon.b (mon.2) 176 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:34:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:48 vm05 bash[17453]: audit 2026-03-10T11:34:48.313464+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:48 vm07 bash[17804]: audit 2026-03-10T11:34:47.727475+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:48 vm07 bash[17804]: audit 2026-03-10T11:34:47.734366+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:34:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:48 vm07 bash[17804]: cluster 2026-03-10T11:34:47.972027+0000 mgr.x (mgr.24733) 100 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:48 vm07 bash[17804]: audit 2026-03-10T11:34:48.015678+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:34:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:48 vm07 bash[17804]: audit 2026-03-10T11:34:48.021722+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:34:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:48 vm07 bash[17804]: audit 2026-03-10T11:34:48.253621+0000 mgr.x (mgr.24733) 101 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:34:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:48 vm07 bash[17804]: audit 2026-03-10T11:34:48.307680+0000 mon.b (mon.2) 175 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:34:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:48 vm07 bash[17804]: audit 2026-03-10T11:34:48.308291+0000 mon.b (mon.2) 176 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:34:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:48 vm07 bash[17804]: audit 2026-03-10T11:34:48.313464+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:34:51.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:51 vm05 bash[22470]: cluster 2026-03-10T11:34:49.972355+0000 mgr.x (mgr.24733) 102 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:51.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:51 vm05 bash[17453]: cluster 2026-03-10T11:34:49.972355+0000 mgr.x (mgr.24733) 102 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:51 vm07 bash[17804]: cluster 2026-03-10T11:34:49.972355+0000 mgr.x (mgr.24733) 102 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:53.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:53 vm05 bash[22470]: cluster 2026-03-10T11:34:51.972810+0000 mgr.x (mgr.24733) 103 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:34:53.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:53 vm05 bash[17453]: cluster 2026-03-10T11:34:51.972810+0000 mgr.x (mgr.24733) 103 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:34:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:53 vm07 bash[17804]: cluster 2026-03-10T11:34:51.972810+0000 mgr.x (mgr.24733) 103 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:34:54.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:54 vm07 bash[38631]: ts=2026-03-10T11:34:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: 
CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:34:54.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:34:54 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:34:54] "GET /metrics HTTP/1.1" 200 37515 "" "Prometheus/2.51.0" 2026-03-10T11:34:55.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:55 vm05 bash[22470]: cluster 2026-03-10T11:34:53.973096+0000 mgr.x (mgr.24733) 104 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:55.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:55 vm05 bash[17453]: cluster 2026-03-10T11:34:53.973096+0000 mgr.x (mgr.24733) 104 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:55.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:55 vm07 bash[17804]: cluster 2026-03-10T11:34:53.973096+0000 mgr.x (mgr.24733) 104 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:57.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:57 vm05 bash[17453]: cluster 2026-03-10T11:34:55.973585+0000 mgr.x (mgr.24733) 105 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:34:57.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:57 vm05 bash[22470]: cluster 2026-03-10T11:34:55.973585+0000 mgr.x (mgr.24733) 105 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:34:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:57 vm07 bash[17804]: cluster 2026-03-10T11:34:55.973585+0000 mgr.x (mgr.24733) 105 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:34:57.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:34:56 vm07 bash[38631]: ts=2026-03-10T11:34:56.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:34:59.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:58 vm05 bash[22470]: cluster 2026-03-10T11:34:57.973958+0000 mgr.x (mgr.24733) 106 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:59.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:58 vm05 bash[22470]: audit 2026-03-10T11:34:58.261397+0000 mgr.x (mgr.24733) 107 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:34:59.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:58 vm05 bash[17453]: cluster 2026-03-10T11:34:57.973958+0000 mgr.x (mgr.24733) 106 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:59.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:58 vm05 bash[17453]: audit 2026-03-10T11:34:58.261397+0000 mgr.x (mgr.24733) 107 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:34:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:58 vm07 bash[17804]: cluster 2026-03-10T11:34:57.973958+0000 mgr.x (mgr.24733) 106 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:34:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:58 vm07 bash[17804]: audit 2026-03-10T11:34:58.261397+0000 mgr.x (mgr.24733) 107 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:35:00.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:34:59 vm05 bash[22470]: audit 2026-03-10T11:34:59.152997+0000 mon.b (mon.2) 177 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-10T11:35:00.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:34:59 vm05 bash[17453]: audit 2026-03-10T11:34:59.152997+0000 mon.b (mon.2) 177 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:35:00.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:34:59 vm07 bash[17804]: audit 2026-03-10T11:34:59.152997+0000 mon.b (mon.2) 177 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:35:01.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:00 vm05 bash[22470]: cluster 2026-03-10T11:34:59.974281+0000 mgr.x (mgr.24733) 108 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:00 vm05 bash[17453]: cluster 2026-03-10T11:34:59.974281+0000 mgr.x (mgr.24733) 108 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:01.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:00 vm07 bash[17804]: cluster 2026-03-10T11:34:59.974281+0000 mgr.x (mgr.24733) 108 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:03.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:03 vm05 bash[22470]: cluster 2026-03-10T11:35:01.974825+0000 mgr.x (mgr.24733) 109 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:03.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:03 vm05 bash[17453]: cluster 2026-03-10T11:35:01.974825+0000 mgr.x (mgr.24733) 109 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:03 vm07 bash[17804]: cluster 2026-03-10T11:35:01.974825+0000 mgr.x (mgr.24733) 109 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:04.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:04 vm07 bash[38631]: ts=2026-03-10T11:35:04.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:35:04.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:35:04 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:35:04] "GET /metrics HTTP/1.1" 200 37518 "" "Prometheus/2.51.0" 2026-03-10T11:35:05.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:05 vm05 bash[22470]: cluster 2026-03-10T11:35:03.975136+0000 mgr.x (mgr.24733) 110 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:05.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:05 vm05 bash[17453]: cluster 2026-03-10T11:35:03.975136+0000 mgr.x (mgr.24733) 110 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:05.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:05 vm07 bash[17804]: cluster 2026-03-10T11:35:03.975136+0000 mgr.x (mgr.24733) 110 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:07.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:07 vm05 bash[17453]: cluster 2026-03-10T11:35:05.975769+0000 mgr.x (mgr.24733) 111 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:07.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:07 vm05 bash[22470]: cluster 2026-03-10T11:35:05.975769+0000 mgr.x (mgr.24733) 111 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:07.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:07 vm07 bash[17804]: cluster 2026-03-10T11:35:05.975769+0000 mgr.x (mgr.24733) 111 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:07.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:06 vm07 bash[38631]: ts=2026-03-10T11:35:06.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 
0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:35:09.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:08 vm05 bash[22470]: cluster 2026-03-10T11:35:07.976176+0000 mgr.x (mgr.24733) 112 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:09.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:08 vm05 bash[22470]: audit 2026-03-10T11:35:08.271964+0000 mgr.x (mgr.24733) 113 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:35:09.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:08 vm05 bash[17453]: cluster 2026-03-10T11:35:07.976176+0000 mgr.x (mgr.24733) 112 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:09.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:08 vm05 bash[17453]: audit 2026-03-10T11:35:08.271964+0000 mgr.x (mgr.24733) 113 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:35:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:08 vm07 bash[17804]: cluster 2026-03-10T11:35:07.976176+0000 mgr.x (mgr.24733) 112 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:08 vm07 bash[17804]: audit 2026-03-10T11:35:08.271964+0000 mgr.x (mgr.24733) 113 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:35:11.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:11 vm05 bash[22470]: cluster 2026-03-10T11:35:09.976508+0000 mgr.x (mgr.24733) 114 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:11.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:11 vm05 bash[17453]: cluster 2026-03-10T11:35:09.976508+0000 mgr.x (mgr.24733) 114 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:11 vm07 bash[17804]: cluster 2026-03-10T11:35:09.976508+0000 mgr.x (mgr.24733) 114 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 
160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:13.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:13 vm05 bash[22470]: cluster 2026-03-10T11:35:11.977051+0000 mgr.x (mgr.24733) 115 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:13.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:13 vm05 bash[17453]: cluster 2026-03-10T11:35:11.977051+0000 mgr.x (mgr.24733) 115 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:13 vm07 bash[17804]: cluster 2026-03-10T11:35:11.977051+0000 mgr.x (mgr.24733) 115 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:14.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:14 vm07 bash[38631]: ts=2026-03-10T11:35:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:35:14.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:35:14 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:35:14] "GET /metrics HTTP/1.1" 200 37515 "" "Prometheus/2.51.0" 2026-03-10T11:35:15.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:15 vm05 bash[22470]: cluster 2026-03-10T11:35:13.977368+0000 mgr.x (mgr.24733) 116 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:15.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:15 vm05 bash[22470]: audit 2026-03-10T11:35:14.153150+0000 mon.b (mon.2) 178 : audit [DBG] 
from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:35:15.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:15 vm05 bash[17453]: cluster 2026-03-10T11:35:13.977368+0000 mgr.x (mgr.24733) 116 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:15.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:15 vm05 bash[17453]: audit 2026-03-10T11:35:14.153150+0000 mon.b (mon.2) 178 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:35:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:15 vm07 bash[17804]: cluster 2026-03-10T11:35:13.977368+0000 mgr.x (mgr.24733) 116 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:15 vm07 bash[17804]: audit 2026-03-10T11:35:14.153150+0000 mon.b (mon.2) 178 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:35:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:17 vm05 bash[22470]: cluster 2026-03-10T11:35:15.977865+0000 mgr.x (mgr.24733) 117 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:17.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:17 vm05 bash[17453]: cluster 2026-03-10T11:35:15.977865+0000 mgr.x (mgr.24733) 117 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:17 vm07 bash[17804]: cluster 2026-03-10T11:35:15.977865+0000 mgr.x (mgr.24733) 117 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:17.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:16 vm07 bash[38631]: ts=2026-03-10T11:35:16.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 
2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:35:19.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:18 vm05 bash[17453]: cluster 2026-03-10T11:35:17.978169+0000 mgr.x (mgr.24733) 118 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:19.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:18 vm05 bash[17453]: audit 2026-03-10T11:35:18.281458+0000 mgr.x (mgr.24733) 119 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:35:19.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:18 vm05 bash[22470]: cluster 2026-03-10T11:35:17.978169+0000 mgr.x (mgr.24733) 118 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:19.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:18 vm05 bash[22470]: audit 2026-03-10T11:35:18.281458+0000 mgr.x (mgr.24733) 119 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:35:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:18 vm07 bash[17804]: cluster 2026-03-10T11:35:17.978169+0000 mgr.x (mgr.24733) 118 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:18 vm07 bash[17804]: audit 2026-03-10T11:35:18.281458+0000 mgr.x (mgr.24733) 119 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:35:21.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:21 vm05 bash[17453]: cluster 2026-03-10T11:35:19.978543+0000 mgr.x (mgr.24733) 120 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:21.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:21 vm05 bash[22470]: cluster 2026-03-10T11:35:19.978543+0000 mgr.x (mgr.24733) 120 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:21.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:21 vm07 bash[17804]: cluster 2026-03-10T11:35:19.978543+0000 mgr.x (mgr.24733) 120 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:23.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:23 vm05 bash[22470]: cluster 2026-03-10T11:35:21.979066+0000 mgr.x (mgr.24733) 121 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:23.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:23 vm05 bash[17453]: cluster 2026-03-10T11:35:21.979066+0000 mgr.x (mgr.24733) 121 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:23 vm07 bash[17804]: cluster 2026-03-10T11:35:21.979066+0000 mgr.x (mgr.24733) 121 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
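Every cluster-log and audit entry in this stretch appears three times because each of mon.a, mon.b and mon.c writes its own copy of the centralized cluster log to its journal, and teuthology tails all three journalctl streams. When scanning a quiet section like this, filtering to a single monitor's stream cuts the volume by roughly two thirds; a sketch, assuming the archived log file is named teuthology.log:

    # Follow only mon.a's copy of the cluster log in this archive.
    grep 'journalctl@ceph.mon.a' teuthology.log | less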
2026-03-10T11:35:24.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:24 vm07 bash[38631]: ts=2026-03-10T11:35:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:35:24.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:35:24 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:35:24] "GET /metrics HTTP/1.1" 200 37515 "" "Prometheus/2.51.0" 2026-03-10T11:35:25.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:25 vm05 bash[22470]: cluster 2026-03-10T11:35:23.979466+0000 mgr.x (mgr.24733) 122 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:25.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:25 vm05 bash[17453]: cluster 2026-03-10T11:35:23.979466+0000 mgr.x (mgr.24733) 122 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:25 vm07 bash[17804]: cluster 2026-03-10T11:35:23.979466+0000 mgr.x (mgr.24733) 122 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:27.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:27 vm05 bash[22470]: cluster 2026-03-10T11:35:25.980033+0000 mgr.x (mgr.24733) 123 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:27.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:27 vm05 bash[17453]: cluster 2026-03-10T11:35:25.980033+0000 mgr.x (mgr.24733) 123 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB 
data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:27.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:27 vm07 bash[17804]: cluster 2026-03-10T11:35:25.980033+0000 mgr.x (mgr.24733) 123 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:27.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:26 vm07 bash[38631]: ts=2026-03-10T11:35:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:35:29.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:28 vm05 bash[17453]: cluster 2026-03-10T11:35:27.980373+0000 mgr.x (mgr.24733) 124 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:29.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:28 vm05 bash[17453]: audit 2026-03-10T11:35:28.288428+0000 mgr.x (mgr.24733) 125 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:35:29.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:28 vm05 bash[22470]: cluster 2026-03-10T11:35:27.980373+0000 mgr.x (mgr.24733) 124 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:29.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:28 vm05 bash[22470]: audit 2026-03-10T11:35:28.288428+0000 mgr.x (mgr.24733) 125 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:35:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:28 vm07 bash[17804]: cluster 2026-03-10T11:35:27.980373+0000 mgr.x (mgr.24733) 124 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:28 vm07 bash[17804]: audit 2026-03-10T11:35:28.288428+0000 mgr.x (mgr.24733) 125 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-10T11:35:30.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:29 vm05 bash[22470]: audit 2026-03-10T11:35:29.153330+0000 mon.b (mon.2) 179 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:35:30.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:29 vm05 bash[17453]: audit 2026-03-10T11:35:29.153330+0000 mon.b (mon.2) 179 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:35:30.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:29 vm07 bash[17804]: audit 2026-03-10T11:35:29.153330+0000 mon.b (mon.2) 179 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:35:31.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:30 vm05 bash[22470]: cluster 2026-03-10T11:35:29.980765+0000 mgr.x (mgr.24733) 126 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:31.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:30 vm05 bash[17453]: cluster 2026-03-10T11:35:29.980765+0000 mgr.x (mgr.24733) 126 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:31.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:30 vm07 bash[17804]: cluster 2026-03-10T11:35:29.980765+0000 mgr.x (mgr.24733) 126 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:33.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:33 vm05 bash[22470]: cluster 2026-03-10T11:35:31.981321+0000 mgr.x (mgr.24733) 127 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:33 vm05 bash[17453]: cluster 2026-03-10T11:35:31.981321+0000 mgr.x (mgr.24733) 127 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:33.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:33 vm07 bash[17804]: cluster 2026-03-10T11:35:31.981321+0000 mgr.x (mgr.24733) 127 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:34.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:34 vm07 bash[38631]: ts=2026-03-10T11:35:34.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:35:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:35:34 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:35:34] "GET /metrics HTTP/1.1" 200 37530 "" "Prometheus/2.51.0" 2026-03-10T11:35:35.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:35 vm05 bash[17453]: cluster 2026-03-10T11:35:33.981603+0000 mgr.x (mgr.24733) 128 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:35.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:35 vm05 bash[22470]: cluster 2026-03-10T11:35:33.981603+0000 mgr.x (mgr.24733) 128 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:35.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:35 vm07 bash[17804]: cluster 2026-03-10T11:35:33.981603+0000 mgr.x (mgr.24733) 128 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:35:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:37 vm05 bash[17453]: cluster 2026-03-10T11:35:35.982092+0000 mgr.x (mgr.24733) 129 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:37.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:37 vm05 bash[22470]: cluster 2026-03-10T11:35:35.982092+0000 mgr.x (mgr.24733) 129 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:37 vm07 bash[17804]: cluster 2026-03-10T11:35:35.982092+0000 mgr.x (mgr.24733) 129 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:35:37.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:36 vm07 bash[38631]: ts=2026-03-10T11:35:36.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 
0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:35:39.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:38 vm05 bash[22470]: cluster 2026-03-10T11:35:37.982395+0000 mgr.x (mgr.24733) 130 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:39.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:38 vm05 bash[22470]: audit 2026-03-10T11:35:38.295168+0000 mgr.x (mgr.24733) 131 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:35:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:38 vm05 bash[17453]: cluster 2026-03-10T11:35:37.982395+0000 mgr.x (mgr.24733) 130 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:38 vm05 bash[17453]: audit 2026-03-10T11:35:38.295168+0000 mgr.x (mgr.24733) 131 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:35:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:38 vm07 bash[17804]: cluster 2026-03-10T11:35:37.982395+0000 mgr.x (mgr.24733) 130 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:38 vm07 bash[17804]: audit 2026-03-10T11:35:38.295168+0000 mgr.x (mgr.24733) 131 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:35:41.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:41 vm05 bash[22470]: cluster 2026-03-10T11:35:39.982750+0000 mgr.x (mgr.24733) 132 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:41.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:41 vm05 bash[17453]: cluster 2026-03-10T11:35:39.982750+0000 mgr.x (mgr.24733) 132 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:41.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:41 vm07 bash[17804]: cluster 2026-03-10T11:35:39.982750+0000 mgr.x (mgr.24733) 132 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:43.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:43 vm05 bash[22470]: cluster 2026-03-10T11:35:41.983235+0000 mgr.x (mgr.24733) 133 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:43.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:43 vm05 bash[17453]: cluster 2026-03-10T11:35:41.983235+0000 mgr.x (mgr.24733) 133 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:43 vm07 bash[17804]: cluster 2026-03-10T11:35:41.983235+0000 mgr.x (mgr.24733) 133 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:44.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:44 vm07 bash[38631]: ts=2026-03-10T11:35:44.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:35:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:35:44 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:35:44] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0"
2026-03-10T11:35:45.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:45 vm05 bash[22470]: cluster 2026-03-10T11:35:43.983595+0000 mgr.x (mgr.24733) 134 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:45.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:45 vm05 bash[22470]: audit 2026-03-10T11:35:44.153569+0000 mon.b (mon.2) 180 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:35:45.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:45 vm05 bash[17453]: cluster 2026-03-10T11:35:43.983595+0000 mgr.x (mgr.24733) 134 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:45.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:45 vm05 bash[17453]: audit 2026-03-10T11:35:44.153569+0000 mon.b (mon.2) 180 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:35:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:45 vm07 bash[17804]: cluster 2026-03-10T11:35:43.983595+0000 mgr.x (mgr.24733) 134 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:45 vm07 bash[17804]: audit 2026-03-10T11:35:44.153569+0000 mon.b (mon.2) 180 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:35:47.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:47 vm05 bash[22470]: cluster 2026-03-10T11:35:45.984164+0000 mgr.x (mgr.24733) 135 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:47.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:47 vm05 bash[17453]: cluster 2026-03-10T11:35:45.984164+0000 mgr.x (mgr.24733) 135 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:47 vm07 bash[17804]: cluster 2026-03-10T11:35:45.984164+0000 mgr.x (mgr.24733) 135 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:47.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:46 vm07 bash[38631]: ts=2026-03-10T11:35:46.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:35:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:48 vm05 bash[22470]: cluster 2026-03-10T11:35:47.984463+0000 mgr.x (mgr.24733) 136 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:48 vm05 bash[22470]: audit 2026-03-10T11:35:48.302994+0000 mgr.x (mgr.24733) 137 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:35:49.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:48 vm05 bash[22470]: audit 2026-03-10T11:35:48.350120+0000 mon.b (mon.2) 181 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:35:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:48 vm05 bash[17453]: cluster 2026-03-10T11:35:47.984463+0000 mgr.x (mgr.24733) 136 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:48 vm05 bash[17453]: audit 2026-03-10T11:35:48.302994+0000 mgr.x (mgr.24733) 137 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:35:49.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:48 vm05 bash[17453]: audit 2026-03-10T11:35:48.350120+0000 mon.b (mon.2) 181 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:35:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:48 vm07 bash[17804]: cluster 2026-03-10T11:35:47.984463+0000 mgr.x (mgr.24733) 136 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:48 vm07 bash[17804]: audit 2026-03-10T11:35:48.302994+0000 mgr.x (mgr.24733) 137 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:35:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:48 vm07 bash[17804]: audit 2026-03-10T11:35:48.350120+0000 mon.b (mon.2) 181 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:35:51.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:51 vm05 bash[17453]: cluster 2026-03-10T11:35:49.984796+0000 mgr.x (mgr.24733) 138 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:51.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:51 vm05 bash[22470]: cluster 2026-03-10T11:35:49.984796+0000 mgr.x (mgr.24733) 138 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:51 vm07 bash[17804]: cluster 2026-03-10T11:35:49.984796+0000 mgr.x (mgr.24733) 138 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:53.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:53 vm05 bash[17453]: cluster 2026-03-10T11:35:51.985315+0000 mgr.x (mgr.24733) 139 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:53.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:53 vm05 bash[22470]: cluster 2026-03-10T11:35:51.985315+0000 mgr.x (mgr.24733) 139 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:53.413 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:53 vm07 bash[17804]: cluster 2026-03-10T11:35:51.985315+0000 mgr.x (mgr.24733) 139 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:54.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:54 vm07 bash[38631]: ts=2026-03-10T11:35:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:35:54.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:35:54 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:35:54] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0"
2026-03-10T11:35:54.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:54 vm07 bash[17804]: audit 2026-03-10T11:35:53.594558+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:54.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:54 vm07 bash[17804]: audit 2026-03-10T11:35:53.600191+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:54.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:54 vm07 bash[17804]: audit 2026-03-10T11:35:53.606357+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:54.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:54 vm07 bash[17804]: audit 2026-03-10T11:35:53.611825+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:54.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:54 vm07 bash[17804]: audit 2026-03-10T11:35:53.895985+0000 mon.b (mon.2) 182 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:35:54.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:54 vm07 bash[17804]: audit 2026-03-10T11:35:53.896493+0000 mon.b (mon.2) 183 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:35:54.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:54 vm07 bash[17804]: audit 2026-03-10T11:35:53.902044+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:54.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:54 vm07 bash[17804]: cluster 2026-03-10T11:35:53.985578+0000 mgr.x (mgr.24733) 140 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:54 vm05 bash[22470]: audit 2026-03-10T11:35:53.594558+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:54 vm05 bash[22470]: audit 2026-03-10T11:35:53.600191+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:54 vm05 bash[22470]: audit 2026-03-10T11:35:53.606357+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:54 vm05 bash[22470]: audit 2026-03-10T11:35:53.611825+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:54 vm05 bash[22470]: audit 2026-03-10T11:35:53.895985+0000 mon.b (mon.2) 182 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:54 vm05 bash[22470]: audit 2026-03-10T11:35:53.896493+0000 mon.b (mon.2) 183 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:54 vm05 bash[22470]: audit 2026-03-10T11:35:53.902044+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:54 vm05 bash[22470]: cluster 2026-03-10T11:35:53.985578+0000 mgr.x (mgr.24733) 140 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:54 vm05 bash[17453]: audit 2026-03-10T11:35:53.594558+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:54 vm05 bash[17453]: audit 2026-03-10T11:35:53.600191+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:54 vm05 bash[17453]: audit 2026-03-10T11:35:53.606357+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:54 vm05 bash[17453]: audit 2026-03-10T11:35:53.611825+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:54 vm05 bash[17453]: audit 2026-03-10T11:35:53.895985+0000 mon.b (mon.2) 182 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:54 vm05 bash[17453]: audit 2026-03-10T11:35:53.896493+0000 mon.b (mon.2) 183 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:54 vm05 bash[17453]: audit 2026-03-10T11:35:53.902044+0000 mon.a (mon.0) 866 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:35:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:54 vm05 bash[17453]: cluster 2026-03-10T11:35:53.985578+0000 mgr.x (mgr.24733) 140 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:57.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:57 vm05 bash[22470]: cluster 2026-03-10T11:35:55.986060+0000 mgr.x (mgr.24733) 141 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:57.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:57 vm05 bash[17453]: cluster 2026-03-10T11:35:55.986060+0000 mgr.x (mgr.24733) 141 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:57 vm07 bash[17804]: cluster 2026-03-10T11:35:55.986060+0000 mgr.x (mgr.24733) 141 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:35:57.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:35:56 vm07 bash[38631]: ts=2026-03-10T11:35:56.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:35:59.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:58 vm05 bash[22470]: cluster 2026-03-10T11:35:57.986363+0000 mgr.x (mgr.24733) 142 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:59.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:58 vm05 bash[22470]: audit 2026-03-10T11:35:58.312906+0000 mgr.x (mgr.24733) 143 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:35:59.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:58 vm05 bash[17453]: cluster 2026-03-10T11:35:57.986363+0000 mgr.x (mgr.24733) 142 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:59.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:58 vm05 bash[17453]: audit 2026-03-10T11:35:58.312906+0000 mgr.x (mgr.24733) 143 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:35:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:58 vm07 bash[17804]: cluster 2026-03-10T11:35:57.986363+0000 mgr.x (mgr.24733) 142 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:35:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:58 vm07 bash[17804]: audit 2026-03-10T11:35:58.312906+0000 mgr.x (mgr.24733) 143 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:36:00.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:35:59 vm05 bash[22470]: audit 2026-03-10T11:35:59.153754+0000 mon.b (mon.2) 184 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:36:00.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:35:59 vm05 bash[17453]: audit 2026-03-10T11:35:59.153754+0000 mon.b (mon.2) 184 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:36:00.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:35:59 vm07 bash[17804]: audit 2026-03-10T11:35:59.153754+0000 mon.b (mon.2) 184 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:36:01.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:00 vm05 bash[22470]: cluster 2026-03-10T11:35:59.986720+0000 mgr.x (mgr.24733) 144 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:00 vm05 bash[17453]: cluster 2026-03-10T11:35:59.986720+0000 mgr.x (mgr.24733) 144 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:01.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:00 vm07 bash[17804]: cluster 2026-03-10T11:35:59.986720+0000 mgr.x (mgr.24733) 144 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:03.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:03 vm05 bash[22470]: cluster 2026-03-10T11:36:01.987144+0000 mgr.x (mgr.24733) 145 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:03.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:03 vm05 bash[17453]: cluster 2026-03-10T11:36:01.987144+0000 mgr.x (mgr.24733) 145 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:03 vm07 bash[17804]: cluster 2026-03-10T11:36:01.987144+0000 mgr.x (mgr.24733) 145 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:04.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:04 vm07 bash[38631]: ts=2026-03-10T11:36:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:36:04.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:36:04 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:36:04] "GET /metrics HTTP/1.1" 200 37527 "" "Prometheus/2.51.0"
2026-03-10T11:36:05.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:05 vm05 bash[22470]: cluster 2026-03-10T11:36:03.987489+0000 mgr.x (mgr.24733) 146 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:05.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:05 vm05 bash[17453]: cluster 2026-03-10T11:36:03.987489+0000 mgr.x (mgr.24733) 146 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:05.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:05 vm07 bash[17804]: cluster 2026-03-10T11:36:03.987489+0000 mgr.x (mgr.24733) 146 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:07.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:07 vm05 bash[22470]: cluster 2026-03-10T11:36:05.987938+0000 mgr.x (mgr.24733) 147 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:07.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:07 vm05 bash[17453]: cluster 2026-03-10T11:36:05.987938+0000 mgr.x (mgr.24733) 147 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:07.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:07 vm07 bash[17804]: cluster 2026-03-10T11:36:05.987938+0000 mgr.x (mgr.24733) 147 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:07.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:06 vm07 bash[38631]: ts=2026-03-10T11:36:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:36:09.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:08 vm05 bash[22470]: cluster 2026-03-10T11:36:07.988251+0000 mgr.x (mgr.24733) 148 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:09.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:08 vm05 bash[22470]: audit 2026-03-10T11:36:08.319819+0000 mgr.x (mgr.24733) 149 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:36:09.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:08 vm05 bash[17453]: cluster 2026-03-10T11:36:07.988251+0000 mgr.x (mgr.24733) 148 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:09.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:08 vm05 bash[17453]: audit 2026-03-10T11:36:08.319819+0000 mgr.x (mgr.24733) 149 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:36:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:08 vm07 bash[17804]: cluster 2026-03-10T11:36:07.988251+0000 mgr.x (mgr.24733) 148 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:08 vm07 bash[17804]: audit 2026-03-10T11:36:08.319819+0000 mgr.x (mgr.24733) 149 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:36:11.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:11 vm05 bash[22470]: cluster 2026-03-10T11:36:09.988656+0000 mgr.x (mgr.24733) 150 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:11.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:11 vm05 bash[17453]: cluster 2026-03-10T11:36:09.988656+0000 mgr.x (mgr.24733) 150 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:11 vm07 bash[17804]: cluster 2026-03-10T11:36:09.988656+0000 mgr.x (mgr.24733) 150 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:13.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:13 vm05 bash[22470]: cluster 2026-03-10T11:36:11.989150+0000 mgr.x (mgr.24733) 151 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:13.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:13 vm05 bash[17453]: cluster 2026-03-10T11:36:11.989150+0000 mgr.x (mgr.24733) 151 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:13 vm07 bash[17804]: cluster 2026-03-10T11:36:11.989150+0000 mgr.x (mgr.24733) 151 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:14.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:14 vm07 bash[38631]: ts=2026-03-10T11:36:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:36:14.449 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch daemon redeploy "mgr.$(ceph mgr dump -f json | jq .standbys | jq .[] | jq -r .name)" --image quay.ceph.io/ceph-ci/ceph:$sha1'
2026-03-10T11:36:14.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:36:14 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:36:14] "GET /metrics HTTP/1.1" 200 37528 "" "Prometheus/2.51.0"
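The redeploy command just issued picks the standby mgr name out of "ceph mgr dump" with three chained jq calls. For reference only (the suite runs the chained form recorded above), a single jq filter yields the same list:

  ceph mgr dump -f json | jq -r '.standbys[].name'
  # Note: with more than one standby this prints several names and the
  # surrounding "mgr.$(...)" substitution would build a bogus daemon id;
  # here there is exactly one standby (mgr.y), so both forms agree.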
2026-03-10T11:36:15.114 INFO:teuthology.orchestra.run.vm05.stdout:Scheduled to redeploy mgr.y on host 'vm05'
2026-03-10T11:36:15.183 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps --refresh'
2026-03-10T11:36:15.326 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:15 vm05 bash[22470]: cluster 2026-03-10T11:36:13.989428+0000 mgr.x (mgr.24733) 152 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:15.326 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:15 vm05 bash[22470]: audit 2026-03-10T11:36:14.153819+0000 mon.b (mon.2) 185 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:36:15.326 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:15 vm05 bash[22470]: audit 2026-03-10T11:36:14.894694+0000 mon.c (mon.1) 47 : audit [DBG] from='client.? 192.168.123.105:0/2000019602' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-10T11:36:15.326 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:15 vm05 bash[17453]: cluster 2026-03-10T11:36:13.989428+0000 mgr.x (mgr.24733) 152 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:15.326 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:15 vm05 bash[17453]: audit 2026-03-10T11:36:14.153819+0000 mon.b (mon.2) 185 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:36:15.326 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:15 vm05 bash[17453]: audit 2026-03-10T11:36:14.894694+0000 mon.c (mon.1) 47 : audit [DBG] from='client.? 192.168.123.105:0/2000019602' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-10T11:36:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:15 vm07 bash[17804]: cluster 2026-03-10T11:36:13.989428+0000 mgr.x (mgr.24733) 152 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:15 vm07 bash[17804]: audit 2026-03-10T11:36:14.153819+0000 mon.b (mon.2) 185 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:36:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:15 vm07 bash[17804]: audit 2026-03-10T11:36:14.894694+0000 mon.c (mon.1) 47 : audit [DBG] from='client.? 192.168.123.105:0/2000019602' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (2m) 22s ago 9m 13.8M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (2m) 22s ago 9m 38.1M - dad864ee21e9 ea7bd1695c30
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (2m) 22s ago 9m 42.0M - 3.5 e1d6a67b021e 71be9fb90a88
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283 running (4m) 22s ago 12m 529M - 19.2.3-678-ge911bdeb 654f31e6858e 29cf7638c524
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:9283 running (13m) 22s ago 13m 400M - 17.2.0 e1d6a67b021e c74ea9550b91
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (13m) 22s ago 13m 54.3M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (12m) 22s ago 12m 42.3M 2048M 17.2.0 e1d6a67b021e 824de3717020
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (12m) 22s ago 12m 37.5M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (2m) 22s ago 9m 7547k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (2m) 22s ago 9m 7579k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (12m) 22s ago 12m 50.2M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (11m) 22s ago 11m 53.2M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (11m) 22s ago 11m 49.5M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (11m) 22s ago 11m 50.7M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (11m) 22s ago 11m 50.5M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (10m) 22s ago 10m 48.2M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (10m) 22s ago 10m 46.8M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (10m) 22s ago 10m 49.0M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (2m) 22s ago 9m 38.3M - 2.51.0 1d3b7f56885b 42d6386fa908
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (9m) 22s ago 9m 83.7M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:36:15.608 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (9m) 22s ago 9m 84.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:36:15.653 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180'
2026-03-10T11:36:16.438 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:16 vm05 bash[22470]: audit 2026-03-10T11:36:15.095295+0000 mgr.x (mgr.24733) 153 : audit [DBG] from='client.14946 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.y", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:36:16.438 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:16 vm05 bash[22470]: audit 2026-03-10T11:36:15.101048+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:16.438 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:16 vm05 bash[22470]: cephadm 2026-03-10T11:36:15.102721+0000 mgr.x (mgr.24733) 154 : cephadm [INF] Schedule redeploy daemon mgr.y
2026-03-10T11:36:16.438 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:16 vm05 bash[22470]: audit 2026-03-10T11:36:15.107152+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:16.438 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:16 vm05 bash[22470]: audit 2026-03-10T11:36:15.114557+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:16.438 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:16 vm05 bash[22470]: audit 2026-03-10T11:36:15.116356+0000 mon.b (mon.2) 186 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:36:16.444 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:16 vm05 bash[17453]: audit 2026-03-10T11:36:15.095295+0000 mgr.x (mgr.24733) 153 : audit [DBG] from='client.14946 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.y", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:36:16.444 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:16 vm05 bash[17453]: audit 2026-03-10T11:36:15.101048+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:16.444 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:16 vm05 bash[17453]: cephadm 2026-03-10T11:36:15.102721+0000 mgr.x (mgr.24733) 154 : cephadm [INF] Schedule redeploy daemon mgr.y
2026-03-10T11:36:16.444 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:16 vm05 bash[17453]: audit 2026-03-10T11:36:15.107152+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:16.444 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:16 vm05 bash[17453]: audit 2026-03-10T11:36:15.114557+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:16.444 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:16 vm05 bash[17453]: audit 2026-03-10T11:36:15.116356+0000 mon.b (mon.2) 186 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:36:16.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:16 vm07 bash[17804]: audit 2026-03-10T11:36:15.095295+0000 mgr.x (mgr.24733) 153 : audit [DBG] from='client.14946 -' entity='client.admin' cmd=[{"prefix": "orch daemon redeploy", "name": "mgr.y", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:36:16.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:16 vm07 bash[17804]: audit 2026-03-10T11:36:15.101048+0000 mon.a (mon.0) 867 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:16.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:16 vm07 bash[17804]: cephadm 2026-03-10T11:36:15.102721+0000 mgr.x (mgr.24733) 154 : cephadm [INF] Schedule redeploy daemon mgr.y
2026-03-10T11:36:16.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:16 vm07 bash[17804]: audit 2026-03-10T11:36:15.107152+0000 mon.a (mon.0) 868 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:16.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:16 vm07 bash[17804]: audit 2026-03-10T11:36:15.114557+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:16.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:16 vm07 bash[17804]: audit 2026-03-10T11:36:15.116356+0000 mon.b (mon.2) 186 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:36:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:17 vm07 bash[17804]: audit 2026-03-10T11:36:15.605403+0000 mgr.x (mgr.24733) 155 : audit [DBG] from='client.14952 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:36:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:17 vm07 bash[17804]: cluster 2026-03-10T11:36:15.989940+0000 mgr.x (mgr.24733) 156 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:17 vm07 bash[17804]: audit 2026-03-10T11:36:16.595786+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:17.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:16 vm07 bash[38631]: ts=2026-03-10T11:36:16.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side"
2026-03-10T11:36:17.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:17 vm05 bash[22470]: audit 2026-03-10T11:36:15.605403+0000 mgr.x (mgr.24733) 155 : audit [DBG] from='client.14952 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:36:17.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:17 vm05 bash[22470]: cluster 2026-03-10T11:36:15.989940+0000 mgr.x (mgr.24733) 156 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:17.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:17 vm05 bash[22470]: audit 2026-03-10T11:36:16.595786+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:17.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:17 vm05 bash[17453]: audit 2026-03-10T11:36:15.605403+0000 mgr.x (mgr.24733) 155 : audit [DBG] from='client.14952 -' entity='client.admin' cmd=[{"prefix": "orch ps", "refresh": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:36:17.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:17 vm05 bash[17453]: cluster 2026-03-10T11:36:15.989940+0000 mgr.x (mgr.24733) 156 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:17.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:17 vm05 bash[17453]: audit 2026-03-10T11:36:16.595786+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24733 ' entity='mgr.x'
2026-03-10T11:36:19.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:18 vm05 bash[17453]: cluster 2026-03-10T11:36:17.990288+0000 mgr.x (mgr.24733) 157 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:19.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:18 vm05 bash[17453]: audit 2026-03-10T11:36:18.328495+0000 mgr.x (mgr.24733) 158 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:36:19.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:18 vm05 bash[22470]: cluster 2026-03-10T11:36:17.990288+0000 mgr.x (mgr.24733) 157 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:19.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:18 vm05 bash[22470]: audit 2026-03-10T11:36:18.328495+0000 mgr.x (mgr.24733) 158 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:36:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:18 vm07 bash[17804]: cluster 2026-03-10T11:36:17.990288+0000 mgr.x (mgr.24733) 157 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:36:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:18 vm07 bash[17804]: audit 2026-03-10T11:36:18.328495+0000 mgr.x (mgr.24733) 158 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:36:21.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:21 vm05 bash[17453]: cluster 2026-03-10T11:36:19.990888+0000 mgr.x (mgr.24733) 159 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:21.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:21 vm05 bash[22470]: cluster 2026-03-10T11:36:19.990888+0000 mgr.x (mgr.24733) 159 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:21.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:21 vm07 bash[17804]: cluster 2026-03-10T11:36:19.990888+0000 mgr.x (mgr.24733) 159 : cluster [DBG] pgmap v103: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:36:22.457 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:36:22.457 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:36:22.461 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:36:22.461 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:36:22.461 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: Stopping Ceph mgr.y for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:36:22.461 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:36:22.461 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
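The KillMode=none warning now repeating for every unit is expected here: cephadm's templated unit file sets KillMode=none at line 23 so that the container runtime, not systemd, tears down the daemon's processes. Purely to illustrate the systemd mechanics (not a recommended change on a cephadm-managed host, where it would fight the orchestrator's own lifecycle handling), the warning could be silenced with a drop-in override:

  sudo systemctl edit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service
  # drop-in contents:
  #   [Service]
  #   KillMode=mixed
  sudo systemctl daemon-reload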
2026-03-10T11:36:22.461 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.461 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.461 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.721 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.721 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 bash[53785]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mgr-y 2026-03-10T11:36:22.721 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.y.service: Main process exited, code=exited, status=143/n/a 2026-03-10T11:36:22.721 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.y.service: Failed with result 'exit-code'. 2026-03-10T11:36:22.721 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: Stopped Ceph mgr.y for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:36:22.721 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.721 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
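mgr.y exiting with status=143 during this redeploy is expected: systemd reports exit codes above 128 as 128 plus the fatal signal number, and 143 - 128 = 15, i.e. SIGTERM from the controlled stop, so the subsequent "Failed with result 'exit-code'" is cosmetic here. A quick sketch for decoding such codes:

# Exit statuses above 128 encode 128 + signal number.
# 143 = 128 + 15 -> the process ended on SIGTERM (a normal stop).
status=143
if [ "$status" -gt 128 ]; then
  sig=$((status - 128))
  echo "terminated by signal $sig ($(kill -l "$sig"))"   # prints: terminated by signal 15 (TERM)
fi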
2026-03-10T11:36:22.721 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.721 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.721 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.721 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.722 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.722 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 systemd[1]: Started Ceph mgr.y for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 
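With "Started Ceph mgr.y" the redeploy completes, and the mgrmap lines further down ("x(active, since 3m), standbys: y") confirm the standby rejoined. A sketch of how one might verify this interactively, assuming a node holding the admin keyring (daemon names are the ones from this run):

# Confirm both mgr daemons are back after the redeploy.
sudo cephadm shell -- ceph orch ps --daemon-type mgr   # mgr.x and mgr.y should show "running"
sudo cephadm shell -- ceph -s                          # expect "mgr: x(active), standbys: y"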
2026-03-10T11:36:22.973 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 bash[53899]: debug 2026-03-10T11:36:22.934+0000 7f2f0757d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.823192+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.829208+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.855061+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.860421+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.860874+0000 mon.b (mon.2) 187 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.861373+0000 mon.b (mon.2) 188 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.865779+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.876508+0000 mon.b (mon.2) 189 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.877293+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.877432+0000 mon.b (mon.2) 190 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:21.877935+0000 mon.b (mon.2) 191 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:36:22.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: cephadm 2026-03-10T11:36:21.878398+0000 mgr.x (mgr.24733) 160 : cephadm [INF] Deploying daemon mgr.y on vm05 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: cluster 2026-03-10T11:36:21.991307+0000 mgr.x (mgr.24733) 161 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:22.974 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:22.750955+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:22.757076+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:22.762338+0000 mon.a (mon.0) 879 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:22.771995+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:22 vm05 bash[22470]: audit 2026-03-10T11:36:22.804692+0000 mon.b (mon.2) 192 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.823192+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.829208+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.855061+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.860421+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.860874+0000 mon.b (mon.2) 187 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.861373+0000 mon.b (mon.2) 188 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.865779+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.876508+0000 mon.b (mon.2) 189 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.877293+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.877432+0000 mon.b (mon.2) 190 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' 
entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:21.877935+0000 mon.b (mon.2) 191 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: cephadm 2026-03-10T11:36:21.878398+0000 mgr.x (mgr.24733) 160 : cephadm [INF] Deploying daemon mgr.y on vm05 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: cluster 2026-03-10T11:36:21.991307+0000 mgr.x (mgr.24733) 161 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:22.750955+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:22.757076+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:22.762338+0000 mon.a (mon.0) 879 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:22.771995+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:22.974 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:22 vm05 bash[17453]: audit 2026-03-10T11:36:22.804692+0000 mon.b (mon.2) 192 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.823192+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.829208+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.855061+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.860421+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.860874+0000 mon.b (mon.2) 187 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.861373+0000 mon.b (mon.2) 188 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.865779+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:23.195 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.876508+0000 mon.b (mon.2) 189 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.877293+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24733 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.877432+0000 mon.b (mon.2) 190 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:21.877935+0000 mon.b (mon.2) 191 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: cephadm 2026-03-10T11:36:21.878398+0000 mgr.x (mgr.24733) 160 : cephadm [INF] Deploying daemon mgr.y on vm05 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: cluster 2026-03-10T11:36:21.991307+0000 mgr.x (mgr.24733) 161 : cluster [DBG] pgmap v104: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:22.750955+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:22.757076+0000 mon.a (mon.0) 878 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:22.762338+0000 mon.a (mon.0) 879 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:22.771995+0000 mon.a (mon.0) 880 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:23.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:22 vm07 bash[17804]: audit 2026-03-10T11:36:22.804692+0000 mon.b (mon.2) 192 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:36:23.342 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:22 vm05 bash[53899]: debug 2026-03-10T11:36:22.970+0000 7f2f0757d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:36:23.343 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:23 vm05 bash[53899]: debug 2026-03-10T11:36:23.094+0000 7f2f0757d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T11:36:23.763 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:23 vm05 bash[53899]: debug 2026-03-10T11:36:23.362+0000 7f2f0757d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:36:24.085 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:23 vm05 bash[53899]: debug 2026-03-10T11:36:23.762+0000 
7f2f0757d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:36:24.085 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:23 vm05 bash[53899]: debug 2026-03-10T11:36:23.838+0000 7f2f0757d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:36:24.085 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:23 vm05 bash[53899]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T11:36:24.085 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:23 vm05 bash[53899]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T11:36:24.085 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:23 vm05 bash[53899]: from numpy import show_config as show_numpy_config 2026-03-10T11:36:24.085 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:23 vm05 bash[53899]: debug 2026-03-10T11:36:23.946+0000 7f2f0757d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:36:24.342 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.086+0000 7f2f0757d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:36:24.343 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.118+0000 7f2f0757d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:36:24.343 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.150+0000 7f2f0757d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:36:24.343 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.190+0000 7f2f0757d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:36:24.343 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.234+0000 7f2f0757d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:36:24.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:24 vm07 bash[38631]: ts=2026-03-10T11:36:24.148Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:36:24.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:36:24 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:36:24] "GET /metrics HTTP/1.1" 200 37528 "" "Prometheus/2.51.0" 2026-03-10T11:36:24.883 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.618+0000 7f2f0757d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:36:24.883 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.650+0000 7f2f0757d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:36:24.883 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.682+0000 7f2f0757d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:36:24.883 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.810+0000 7f2f0757d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:36:24.883 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.846+0000 7f2f0757d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:36:25.141 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:25 vm05 bash[22470]: cluster 2026-03-10T11:36:23.991636+0000 mgr.x (mgr.24733) 162 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:25.141 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.882+0000 7f2f0757d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:36:25.141 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:24 vm05 bash[53899]: debug 2026-03-10T11:36:24.982+0000 7f2f0757d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:36:25.141 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:25 vm05 bash[17453]: cluster 2026-03-10T11:36:23.991636+0000 mgr.x (mgr.24733) 162 : cluster [DBG] pgmap v105: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:25 vm07 bash[17804]: cluster 2026-03-10T11:36:23.991636+0000 mgr.x (mgr.24733) 162 : cluster [DBG] pgmap v105: 161 pgs: 161 
active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:25.500 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: debug 2026-03-10T11:36:25.138+0000 7f2f0757d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:36:25.501 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: debug 2026-03-10T11:36:25.298+0000 7f2f0757d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:36:25.501 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: debug 2026-03-10T11:36:25.330+0000 7f2f0757d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:36:25.501 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: debug 2026-03-10T11:36:25.366+0000 7f2f0757d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:36:25.830 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: debug 2026-03-10T11:36:25.498+0000 7f2f0757d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:36:25.830 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: debug 2026-03-10T11:36:25.714+0000 7f2f0757d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:36:25.830 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: [10/Mar/2026:11:36:25] ENGINE Bus STARTING 2026-03-10T11:36:25.830 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: CherryPy Checker: 2026-03-10T11:36:25.830 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: The Application mounted at '' has an empty config. 2026-03-10T11:36:26.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: [10/Mar/2026:11:36:25] ENGINE Serving on http://:::9283 2026-03-10T11:36:26.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:36:25 vm05 bash[53899]: [10/Mar/2026:11:36:25] ENGINE Bus STARTED 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:26 vm05 bash[17453]: cluster 2026-03-10T11:36:25.721859+0000 mon.a (mon.0) 881 : cluster [DBG] Standby manager daemon y restarted 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:26 vm05 bash[17453]: cluster 2026-03-10T11:36:25.721934+0000 mon.a (mon.0) 882 : cluster [DBG] Standby manager daemon y started 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:26 vm05 bash[17453]: audit 2026-03-10T11:36:25.723985+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:26 vm05 bash[17453]: audit 2026-03-10T11:36:25.725046+0000 mon.c (mon.1) 49 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:26 vm05 bash[17453]: audit 2026-03-10T11:36:25.726394+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:26 vm05 bash[17453]: audit 2026-03-10T11:36:25.727378+0000 mon.c (mon.1) 51 : audit [DBG] from='mgr.? 
192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:26 vm05 bash[22470]: cluster 2026-03-10T11:36:25.721859+0000 mon.a (mon.0) 881 : cluster [DBG] Standby manager daemon y restarted 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:26 vm05 bash[22470]: cluster 2026-03-10T11:36:25.721934+0000 mon.a (mon.0) 882 : cluster [DBG] Standby manager daemon y started 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:26 vm05 bash[22470]: audit 2026-03-10T11:36:25.723985+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:26 vm05 bash[22470]: audit 2026-03-10T11:36:25.725046+0000 mon.c (mon.1) 49 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:26 vm05 bash[22470]: audit 2026-03-10T11:36:25.726394+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T11:36:26.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:26 vm05 bash[22470]: audit 2026-03-10T11:36:25.727378+0000 mon.c (mon.1) 51 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:36:26.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:26 vm07 bash[17804]: cluster 2026-03-10T11:36:25.721859+0000 mon.a (mon.0) 881 : cluster [DBG] Standby manager daemon y restarted 2026-03-10T11:36:26.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:26 vm07 bash[17804]: cluster 2026-03-10T11:36:25.721934+0000 mon.a (mon.0) 882 : cluster [DBG] Standby manager daemon y started 2026-03-10T11:36:26.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:26 vm07 bash[17804]: audit 2026-03-10T11:36:25.723985+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch 2026-03-10T11:36:26.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:26 vm07 bash[17804]: audit 2026-03-10T11:36:25.725046+0000 mon.c (mon.1) 49 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:36:26.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:26 vm07 bash[17804]: audit 2026-03-10T11:36:25.726394+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.? 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch 2026-03-10T11:36:26.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:26 vm07 bash[17804]: audit 2026-03-10T11:36:25.727378+0000 mon.c (mon.1) 51 : audit [DBG] from='mgr.? 
192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:36:27.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:27 vm05 bash[22470]: cluster 2026-03-10T11:36:25.992159+0000 mgr.x (mgr.24733) 163 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:27.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:27 vm05 bash[22470]: cluster 2026-03-10T11:36:26.093457+0000 mon.a (mon.0) 883 : cluster [DBG] mgrmap e27: x(active, since 3m), standbys: y 2026-03-10T11:36:27.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:27 vm05 bash[17453]: cluster 2026-03-10T11:36:25.992159+0000 mgr.x (mgr.24733) 163 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:27.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:27 vm05 bash[17453]: cluster 2026-03-10T11:36:26.093457+0000 mon.a (mon.0) 883 : cluster [DBG] mgrmap e27: x(active, since 3m), standbys: y 2026-03-10T11:36:27.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:27 vm07 bash[17804]: cluster 2026-03-10T11:36:25.992159+0000 mgr.x (mgr.24733) 163 : cluster [DBG] pgmap v106: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:27.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:27 vm07 bash[17804]: cluster 2026-03-10T11:36:26.093457+0000 mon.a (mon.0) 883 : cluster [DBG] mgrmap e27: x(active, since 3m), standbys: y 2026-03-10T11:36:27.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:26 vm07 bash[38631]: ts=2026-03-10T11:36:26.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:36:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:29 vm07 bash[17804]: cluster 2026-03-10T11:36:27.992479+0000 mgr.x (mgr.24733) 164 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:29 vm07 bash[17804]: audit 2026-03-10T11:36:28.140947+0000 
mon.a (mon.0) 884 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:29 vm07 bash[17804]: audit 2026-03-10T11:36:28.147052+0000 mon.a (mon.0) 885 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:29 vm07 bash[17804]: audit 2026-03-10T11:36:28.148036+0000 mon.b (mon.2) 193 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:36:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:29 vm07 bash[17804]: audit 2026-03-10T11:36:28.148874+0000 mon.b (mon.2) 194 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:36:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:29 vm07 bash[17804]: audit 2026-03-10T11:36:28.153881+0000 mon.a (mon.0) 886 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:29 vm07 bash[17804]: audit 2026-03-10T11:36:28.335570+0000 mgr.x (mgr.24733) 165 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:29.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:29 vm05 bash[17453]: cluster 2026-03-10T11:36:27.992479+0000 mgr.x (mgr.24733) 164 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:29.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:29 vm05 bash[17453]: audit 2026-03-10T11:36:28.140947+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:29.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:29 vm05 bash[17453]: audit 2026-03-10T11:36:28.147052+0000 mon.a (mon.0) 885 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:29.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:29 vm05 bash[17453]: audit 2026-03-10T11:36:28.148036+0000 mon.b (mon.2) 193 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:29 vm05 bash[17453]: audit 2026-03-10T11:36:28.148874+0000 mon.b (mon.2) 194 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:29 vm05 bash[17453]: audit 2026-03-10T11:36:28.153881+0000 mon.a (mon.0) 886 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:29 vm05 bash[17453]: audit 2026-03-10T11:36:28.335570+0000 mgr.x (mgr.24733) 165 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:29 vm05 bash[22470]: cluster 2026-03-10T11:36:27.992479+0000 mgr.x (mgr.24733) 164 : cluster [DBG] pgmap v107: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:29 vm05 bash[22470]: audit 2026-03-10T11:36:28.140947+0000 mon.a (mon.0) 884 : audit [INF] 
from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:29 vm05 bash[22470]: audit 2026-03-10T11:36:28.147052+0000 mon.a (mon.0) 885 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:29 vm05 bash[22470]: audit 2026-03-10T11:36:28.148036+0000 mon.b (mon.2) 193 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:29 vm05 bash[22470]: audit 2026-03-10T11:36:28.148874+0000 mon.b (mon.2) 194 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:29 vm05 bash[22470]: audit 2026-03-10T11:36:28.153881+0000 mon.a (mon.0) 886 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:36:29.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:29 vm05 bash[22470]: audit 2026-03-10T11:36:28.335570+0000 mgr.x (mgr.24733) 165 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:30.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:30 vm07 bash[17804]: audit 2026-03-10T11:36:29.156471+0000 mon.b (mon.2) 195 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:36:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:30 vm05 bash[22470]: audit 2026-03-10T11:36:29.156471+0000 mon.b (mon.2) 195 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:36:30.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:30 vm05 bash[17453]: audit 2026-03-10T11:36:29.156471+0000 mon.b (mon.2) 195 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:36:31.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:31 vm07 bash[17804]: cluster 2026-03-10T11:36:29.992879+0000 mgr.x (mgr.24733) 166 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:31.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:31 vm05 bash[22470]: cluster 2026-03-10T11:36:29.992879+0000 mgr.x (mgr.24733) 166 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:31.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:31 vm05 bash[17453]: cluster 2026-03-10T11:36:29.992879+0000 mgr.x (mgr.24733) 166 : cluster [DBG] pgmap v108: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:33.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:33 vm07 bash[17804]: cluster 2026-03-10T11:36:31.993163+0000 mgr.x (mgr.24733) 167 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:33.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:33 vm05 bash[22470]: cluster 2026-03-10T11:36:31.993163+0000 mgr.x (mgr.24733) 167 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 
73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:33.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:33 vm05 bash[17453]: cluster 2026-03-10T11:36:31.993163+0000 mgr.x (mgr.24733) 167 : cluster [DBG] pgmap v109: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:34.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:34 vm07 bash[38631]: ts=2026-03-10T11:36:34.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:36:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:36:34 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:36:34] "GET /metrics HTTP/1.1" 200 37531 "" "Prometheus/2.51.0" 2026-03-10T11:36:35.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:35 vm07 bash[17804]: cluster 2026-03-10T11:36:33.993394+0000 mgr.x (mgr.24733) 168 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:35.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:35 vm05 bash[22470]: cluster 2026-03-10T11:36:33.993394+0000 mgr.x (mgr.24733) 168 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:35.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:35 vm05 bash[17453]: cluster 2026-03-10T11:36:33.993394+0000 mgr.x (mgr.24733) 168 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:37 vm07 bash[17804]: cluster 2026-03-10T11:36:35.993967+0000 mgr.x (mgr.24733) 169 : 
cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:37.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:36 vm07 bash[38631]: ts=2026-03-10T11:36:36.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:36:37.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:37 vm05 bash[22470]: cluster 2026-03-10T11:36:35.993967+0000 mgr.x (mgr.24733) 169 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:37.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:37 vm05 bash[17453]: cluster 2026-03-10T11:36:35.993967+0000 mgr.x (mgr.24733) 169 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:38 vm05 bash[22470]: cluster 2026-03-10T11:36:37.994257+0000 mgr.x (mgr.24733) 170 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:38 vm05 bash[22470]: audit 2026-03-10T11:36:38.346273+0000 mgr.x (mgr.24733) 171 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:39.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:38 vm05 bash[17453]: cluster 2026-03-10T11:36:37.994257+0000 mgr.x (mgr.24733) 170 : cluster [DBG] pgmap v112: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:38 vm05 bash[17453]: audit 2026-03-10T11:36:38.346273+0000 mgr.x (mgr.24733) 171 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:38 vm07 bash[17804]: cluster 2026-03-10T11:36:37.994257+0000 mgr.x (mgr.24733) 170 : cluster [DBG] pgmap v112: 
161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:38 vm07 bash[17804]: audit 2026-03-10T11:36:38.346273+0000 mgr.x (mgr.24733) 171 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:41 vm05 bash[22470]: cluster 2026-03-10T11:36:39.994638+0000 mgr.x (mgr.24733) 172 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:41.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:41 vm05 bash[17453]: cluster 2026-03-10T11:36:39.994638+0000 mgr.x (mgr.24733) 172 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:41.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:41 vm07 bash[17804]: cluster 2026-03-10T11:36:39.994638+0000 mgr.x (mgr.24733) 172 : cluster [DBG] pgmap v113: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:43.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:43 vm05 bash[22470]: cluster 2026-03-10T11:36:41.994947+0000 mgr.x (mgr.24733) 173 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:43.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:43 vm05 bash[17453]: cluster 2026-03-10T11:36:41.994947+0000 mgr.x (mgr.24733) 173 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:43 vm07 bash[17804]: cluster 2026-03-10T11:36:41.994947+0000 mgr.x (mgr.24733) 173 : cluster [DBG] pgmap v114: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:44.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:44 vm07 bash[38631]: ts=2026-03-10T11:36:44.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:36:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:36:44 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:36:44] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0" 2026-03-10T11:36:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:45 vm05 bash[22470]: cluster 2026-03-10T11:36:43.995226+0000 mgr.x (mgr.24733) 174 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:45 vm05 bash[22470]: audit 2026-03-10T11:36:44.154241+0000 mon.b (mon.2) 196 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:36:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:45 vm05 bash[17453]: cluster 2026-03-10T11:36:43.995226+0000 mgr.x (mgr.24733) 174 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:45 vm05 bash[17453]: audit 2026-03-10T11:36:44.154241+0000 mon.b (mon.2) 196 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:36:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:45 vm07 bash[17804]: cluster 2026-03-10T11:36:43.995226+0000 mgr.x (mgr.24733) 174 : cluster [DBG] pgmap v115: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:45 vm07 bash[17804]: audit 2026-03-10T11:36:44.154241+0000 mon.b (mon.2) 196 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:36:47.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:47 vm05 bash[22470]: cluster 2026-03-10T11:36:45.995721+0000 mgr.x (mgr.24733) 175 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:47.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:47 vm05 bash[17453]: cluster 2026-03-10T11:36:45.995721+0000 mgr.x (mgr.24733) 175 : cluster [DBG] pgmap v116: 161 
pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:47 vm07 bash[17804]: cluster 2026-03-10T11:36:45.995721+0000 mgr.x (mgr.24733) 175 : cluster [DBG] pgmap v116: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:47.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:46 vm07 bash[38631]: ts=2026-03-10T11:36:46.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:36:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:48 vm07 bash[17804]: cluster 2026-03-10T11:36:47.996047+0000 mgr.x (mgr.24733) 176 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:48 vm07 bash[17804]: audit 2026-03-10T11:36:48.353159+0000 mgr.x (mgr.24733) 177 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:49.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:48 vm05 bash[22470]: cluster 2026-03-10T11:36:47.996047+0000 mgr.x (mgr.24733) 176 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:49.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:48 vm05 bash[22470]: audit 2026-03-10T11:36:48.353159+0000 mgr.x (mgr.24733) 177 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:49.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:48 vm05 bash[17453]: cluster 2026-03-10T11:36:47.996047+0000 mgr.x (mgr.24733) 176 : cluster [DBG] pgmap v117: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:49.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:48 vm05 bash[17453]: audit 2026-03-10T11:36:48.353159+0000 mgr.x (mgr.24733) 177 : audit [DBG] from='client.14901 -' 
entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:51.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:51 vm05 bash[22470]: cluster 2026-03-10T11:36:49.996521+0000 mgr.x (mgr.24733) 178 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:51.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:51 vm05 bash[17453]: cluster 2026-03-10T11:36:49.996521+0000 mgr.x (mgr.24733) 178 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:51 vm07 bash[17804]: cluster 2026-03-10T11:36:49.996521+0000 mgr.x (mgr.24733) 178 : cluster [DBG] pgmap v118: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:53.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:53 vm05 bash[22470]: cluster 2026-03-10T11:36:51.996897+0000 mgr.x (mgr.24733) 179 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:53.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:53 vm05 bash[17453]: cluster 2026-03-10T11:36:51.996897+0000 mgr.x (mgr.24733) 179 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:53 vm07 bash[17804]: cluster 2026-03-10T11:36:51.996897+0000 mgr.x (mgr.24733) 179 : cluster [DBG] pgmap v119: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:54.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:54 vm07 bash[38631]: ts=2026-03-10T11:36:54.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:36:54.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:36:54 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:36:54] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0" 2026-03-10T11:36:55.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:55 vm05 bash[22470]: cluster 2026-03-10T11:36:53.997166+0000 mgr.x (mgr.24733) 180 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:55.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:55 vm05 bash[17453]: cluster 2026-03-10T11:36:53.997166+0000 mgr.x (mgr.24733) 180 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:55.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:55 vm07 bash[17804]: cluster 2026-03-10T11:36:53.997166+0000 mgr.x (mgr.24733) 180 : cluster [DBG] pgmap v120: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:57.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:57 vm05 bash[22470]: cluster 2026-03-10T11:36:55.997879+0000 mgr.x (mgr.24733) 181 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:57 vm05 bash[17453]: cluster 2026-03-10T11:36:55.997879+0000 mgr.x (mgr.24733) 181 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:57 vm07 bash[17804]: cluster 2026-03-10T11:36:55.997879+0000 mgr.x (mgr.24733) 181 : cluster [DBG] pgmap v121: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:36:57.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:36:56 vm07 bash[38631]: ts=2026-03-10T11:36:56.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info 
< 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:36:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:58 vm07 bash[17804]: cluster 2026-03-10T11:36:57.998184+0000 mgr.x (mgr.24733) 182 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:58 vm07 bash[17804]: audit 2026-03-10T11:36:58.357507+0000 mgr.x (mgr.24733) 183 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:59.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:58 vm05 bash[22470]: cluster 2026-03-10T11:36:57.998184+0000 mgr.x (mgr.24733) 182 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:59.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:58 vm05 bash[22470]: audit 2026-03-10T11:36:58.357507+0000 mgr.x (mgr.24733) 183 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:36:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:58 vm05 bash[17453]: cluster 2026-03-10T11:36:57.998184+0000 mgr.x (mgr.24733) 182 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:36:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:58 vm05 bash[17453]: audit 2026-03-10T11:36:58.357507+0000 mgr.x (mgr.24733) 183 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:00.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:36:59 vm07 bash[17804]: audit 2026-03-10T11:36:59.154526+0000 mon.b (mon.2) 197 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:00.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:36:59 vm05 bash[22470]: audit 2026-03-10T11:36:59.154526+0000 mon.b (mon.2) 197 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:00.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:36:59 vm05 bash[17453]: audit 2026-03-10T11:36:59.154526+0000 mon.b (mon.2) 197 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' 
entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:01.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:00 vm07 bash[17804]: cluster 2026-03-10T11:36:59.998577+0000 mgr.x (mgr.24733) 184 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:00 vm05 bash[22470]: cluster 2026-03-10T11:36:59.998577+0000 mgr.x (mgr.24733) 184 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:00 vm05 bash[17453]: cluster 2026-03-10T11:36:59.998577+0000 mgr.x (mgr.24733) 184 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:03.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:03 vm05 bash[22470]: cluster 2026-03-10T11:37:01.998873+0000 mgr.x (mgr.24733) 185 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:03.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:03 vm05 bash[17453]: cluster 2026-03-10T11:37:01.998873+0000 mgr.x (mgr.24733) 185 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:03 vm07 bash[17804]: cluster 2026-03-10T11:37:01.998873+0000 mgr.x (mgr.24733) 185 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:04.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:04 vm07 bash[38631]: ts=2026-03-10T11:37:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:04.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:37:04 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:37:04] "GET /metrics HTTP/1.1" 200 37531 "" "Prometheus/2.51.0" 2026-03-10T11:37:05.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:05 vm05 bash[22470]: cluster 2026-03-10T11:37:03.999181+0000 mgr.x (mgr.24733) 186 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:05.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:05 vm05 bash[17453]: cluster 2026-03-10T11:37:03.999181+0000 mgr.x (mgr.24733) 186 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:05.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:05 vm07 bash[17804]: cluster 2026-03-10T11:37:03.999181+0000 mgr.x (mgr.24733) 186 : cluster [DBG] pgmap v125: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:07.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:07 vm05 bash[22470]: cluster 2026-03-10T11:37:05.999667+0000 mgr.x (mgr.24733) 187 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:07.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:07 vm05 bash[17453]: cluster 2026-03-10T11:37:05.999667+0000 mgr.x (mgr.24733) 187 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:07.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:07 vm07 bash[17804]: cluster 2026-03-10T11:37:05.999667+0000 mgr.x (mgr.24733) 187 : cluster [DBG] pgmap v126: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:07.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:06 vm07 bash[38631]: ts=2026-03-10T11:37:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info 
< 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:08 vm07 bash[17804]: cluster 2026-03-10T11:37:07.999949+0000 mgr.x (mgr.24733) 188 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:08 vm07 bash[17804]: audit 2026-03-10T11:37:08.364415+0000 mgr.x (mgr.24733) 189 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:09.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:08 vm05 bash[22470]: cluster 2026-03-10T11:37:07.999949+0000 mgr.x (mgr.24733) 188 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:09.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:08 vm05 bash[22470]: audit 2026-03-10T11:37:08.364415+0000 mgr.x (mgr.24733) 189 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:09.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:08 vm05 bash[17453]: cluster 2026-03-10T11:37:07.999949+0000 mgr.x (mgr.24733) 188 : cluster [DBG] pgmap v127: 161 pgs: 161 active+clean; 457 KiB data, 73 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:09.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:08 vm05 bash[17453]: audit 2026-03-10T11:37:08.364415+0000 mgr.x (mgr.24733) 189 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:11.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:11 vm05 bash[22470]: cluster 2026-03-10T11:37:10.000347+0000 mgr.x (mgr.24733) 190 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:37:11.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:11 vm05 bash[17453]: cluster 2026-03-10T11:37:10.000347+0000 mgr.x (mgr.24733) 190 : cluster [DBG] pgmap v128: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:37:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:11 vm07 bash[17804]: cluster 2026-03-10T11:37:10.000347+0000 mgr.x (mgr.24733) 190 : cluster [DBG] pgmap v128: 161 pgs: 161 
active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:37:13.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:13 vm05 bash[22470]: cluster 2026-03-10T11:37:12.000654+0000 mgr.x (mgr.24733) 191 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-10T11:37:13.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:13 vm05 bash[17453]: cluster 2026-03-10T11:37:12.000654+0000 mgr.x (mgr.24733) 191 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-10T11:37:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:13 vm07 bash[17804]: cluster 2026-03-10T11:37:12.000654+0000 mgr.x (mgr.24733) 191 : cluster [DBG] pgmap v129: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-10T11:37:14.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:14 vm07 bash[38631]: ts=2026-03-10T11:37:14.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:14.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:37:14 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:37:14] "GET /metrics HTTP/1.1" 200 37536 "" "Prometheus/2.51.0" 2026-03-10T11:37:15.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:15 vm05 bash[22470]: cluster 2026-03-10T11:37:14.000972+0000 mgr.x (mgr.24733) 192 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-10T11:37:15.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:15 vm05 bash[22470]: audit 2026-03-10T11:37:14.154600+0000 mon.b (mon.2) 198 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:15.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:15 vm05 bash[17453]: cluster 2026-03-10T11:37:14.000972+0000 mgr.x (mgr.24733) 192 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-10T11:37:15.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:15 vm05 bash[17453]: audit 2026-03-10T11:37:14.154600+0000 mon.b (mon.2) 198 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:15 vm07 bash[17804]: cluster 2026-03-10T11:37:14.000972+0000 mgr.x (mgr.24733) 192 : cluster [DBG] pgmap v130: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-10T11:37:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:15 vm07 bash[17804]: audit 2026-03-10T11:37:14.154600+0000 mon.b (mon.2) 198 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:17 vm05 bash[22470]: cluster 2026-03-10T11:37:16.001498+0000 mgr.x (mgr.24733) 193 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:37:17.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:17 vm05 bash[17453]: cluster 2026-03-10T11:37:16.001498+0000 
mgr.x (mgr.24733) 193 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:37:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:17 vm07 bash[17804]: cluster 2026-03-10T11:37:16.001498+0000 mgr.x (mgr.24733) 193 : cluster [DBG] pgmap v131: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:37:17.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:16 vm07 bash[38631]: ts=2026-03-10T11:37:16.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:18 vm07 bash[17804]: cluster 2026-03-10T11:37:18.001839+0000 mgr.x (mgr.24733) 194 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-10T11:37:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:18 vm07 bash[17804]: audit 2026-03-10T11:37:18.371346+0000 mgr.x (mgr.24733) 195 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:19.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:18 vm05 bash[22470]: cluster 2026-03-10T11:37:18.001839+0000 mgr.x (mgr.24733) 194 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-10T11:37:19.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:18 vm05 bash[22470]: audit 2026-03-10T11:37:18.371346+0000 mgr.x (mgr.24733) 195 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:19.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:18 vm05 bash[17453]: cluster 2026-03-10T11:37:18.001839+0000 mgr.x (mgr.24733) 194 : cluster [DBG] pgmap v132: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s 2026-03-10T11:37:19.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:18 vm05 bash[17453]: audit 
2026-03-10T11:37:18.371346+0000 mgr.x (mgr.24733) 195 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:21.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:21 vm05 bash[22470]: cluster 2026-03-10T11:37:20.002260+0000 mgr.x (mgr.24733) 196 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:37:21.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:21 vm05 bash[17453]: cluster 2026-03-10T11:37:20.002260+0000 mgr.x (mgr.24733) 196 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:37:21.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:21 vm07 bash[17804]: cluster 2026-03-10T11:37:20.002260+0000 mgr.x (mgr.24733) 196 : cluster [DBG] pgmap v133: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:37:23.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:23 vm05 bash[22470]: cluster 2026-03-10T11:37:22.002653+0000 mgr.x (mgr.24733) 197 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:23.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:23 vm05 bash[17453]: cluster 2026-03-10T11:37:22.002653+0000 mgr.x (mgr.24733) 197 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:23 vm07 bash[17804]: cluster 2026-03-10T11:37:22.002653+0000 mgr.x (mgr.24733) 197 : cluster [DBG] pgmap v134: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:24.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:24 vm07 bash[38631]: ts=2026-03-10T11:37:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:24.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:37:24 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:37:24] "GET /metrics HTTP/1.1" 200 37536 "" "Prometheus/2.51.0" 2026-03-10T11:37:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:25 vm07 bash[17804]: cluster 2026-03-10T11:37:24.002964+0000 mgr.x (mgr.24733) 198 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:25.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:25 vm05 bash[22470]: cluster 2026-03-10T11:37:24.002964+0000 mgr.x (mgr.24733) 198 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:25.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:25 vm05 bash[17453]: cluster 2026-03-10T11:37:24.002964+0000 mgr.x (mgr.24733) 198 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:27.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:27 vm07 bash[17804]: cluster 2026-03-10T11:37:26.003456+0000 mgr.x (mgr.24733) 199 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:27.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:26 vm07 bash[38631]: ts=2026-03-10T11:37:26.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", 
machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:27 vm05 bash[22470]: cluster 2026-03-10T11:37:26.003456+0000 mgr.x (mgr.24733) 199 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:27 vm05 bash[17453]: cluster 2026-03-10T11:37:26.003456+0000 mgr.x (mgr.24733) 199 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:28 vm07 bash[17804]: cluster 2026-03-10T11:37:28.003803+0000 mgr.x (mgr.24733) 200 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:28 vm07 bash[17804]: audit 2026-03-10T11:37:28.191528+0000 mon.b (mon.2) 199 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:37:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:28 vm07 bash[17804]: audit 2026-03-10T11:37:28.379314+0000 mgr.x (mgr.24733) 201 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:28 vm07 bash[17804]: audit 2026-03-10T11:37:28.483822+0000 mon.b (mon.2) 200 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:37:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:28 vm07 bash[17804]: audit 2026-03-10T11:37:28.484674+0000 mon.b (mon.2) 201 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:37:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:28 vm07 bash[17804]: audit 2026-03-10T11:37:28.493702+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:28 vm05 bash[22470]: cluster 2026-03-10T11:37:28.003803+0000 mgr.x (mgr.24733) 200 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:28 vm05 bash[22470]: audit 2026-03-10T11:37:28.191528+0000 mon.b (mon.2) 199 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:28 vm05 bash[22470]: audit 2026-03-10T11:37:28.379314+0000 mgr.x (mgr.24733) 201 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:28 vm05 bash[22470]: audit 2026-03-10T11:37:28.483822+0000 mon.b (mon.2) 200 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:28 vm05 bash[22470]: audit 2026-03-10T11:37:28.484674+0000 mon.b (mon.2) 201 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:28 vm05 bash[22470]: audit 2026-03-10T11:37:28.493702+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:28 vm05 bash[17453]: cluster 2026-03-10T11:37:28.003803+0000 mgr.x (mgr.24733) 200 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:28 vm05 bash[17453]: audit 2026-03-10T11:37:28.191528+0000 mon.b (mon.2) 199 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:28 vm05 bash[17453]: audit 2026-03-10T11:37:28.379314+0000 mgr.x (mgr.24733) 201 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:28 vm05 bash[17453]: audit 2026-03-10T11:37:28.483822+0000 mon.b (mon.2) 200 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:28 vm05 bash[17453]: audit 2026-03-10T11:37:28.484674+0000 mon.b (mon.2) 201 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:37:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:28 vm05 bash[17453]: audit 2026-03-10T11:37:28.493702+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:37:30.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:29 vm07 bash[17804]: audit 2026-03-10T11:37:29.154772+0000 mon.b (mon.2) 202 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:30.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:29 vm05 bash[22470]: audit 2026-03-10T11:37:29.154772+0000 mon.b (mon.2) 202 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:30.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:29 vm05 bash[17453]: audit 2026-03-10T11:37:29.154772+0000 mon.b (mon.2) 202 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:31.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:30 vm07 bash[17804]: cluster 2026-03-10T11:37:30.004386+0000 mgr.x (mgr.24733) 202 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 
GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:31.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:30 vm05 bash[22470]: cluster 2026-03-10T11:37:30.004386+0000 mgr.x (mgr.24733) 202 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:31.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:30 vm05 bash[17453]: cluster 2026-03-10T11:37:30.004386+0000 mgr.x (mgr.24733) 202 : cluster [DBG] pgmap v138: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:33 vm05 bash[22470]: cluster 2026-03-10T11:37:32.004674+0000 mgr.x (mgr.24733) 203 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:33.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:33 vm05 bash[17453]: cluster 2026-03-10T11:37:32.004674+0000 mgr.x (mgr.24733) 203 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:33.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:33 vm07 bash[17804]: cluster 2026-03-10T11:37:32.004674+0000 mgr.x (mgr.24733) 203 : cluster [DBG] pgmap v139: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:34.409 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:34 vm07 bash[38631]: ts=2026-03-10T11:37:34.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:37:34 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:37:34] "GET /metrics HTTP/1.1" 200 37534 "" "Prometheus/2.51.0" 2026-03-10T11:37:35.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:35 vm05 bash[22470]: cluster 2026-03-10T11:37:34.004953+0000 mgr.x (mgr.24733) 204 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:35.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:35 vm05 bash[17453]: cluster 2026-03-10T11:37:34.004953+0000 mgr.x (mgr.24733) 204 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:35.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:35 vm07 bash[17804]: cluster 2026-03-10T11:37:34.004953+0000 mgr.x (mgr.24733) 204 : cluster [DBG] pgmap v140: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:37 vm05 bash[22470]: cluster 2026-03-10T11:37:36.005434+0000 mgr.x (mgr.24733) 205 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:37.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:37 vm05 bash[17453]: cluster 2026-03-10T11:37:36.005434+0000 mgr.x (mgr.24733) 205 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:37 vm07 bash[17804]: cluster 2026-03-10T11:37:36.005434+0000 mgr.x (mgr.24733) 205 : cluster [DBG] pgmap v141: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:37.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:36 vm07 bash[38631]: ts=2026-03-10T11:37:36.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info 
< 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:38 vm07 bash[17804]: cluster 2026-03-10T11:37:38.005772+0000 mgr.x (mgr.24733) 206 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:38 vm07 bash[17804]: audit 2026-03-10T11:37:38.389018+0000 mgr.x (mgr.24733) 207 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:39.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:38 vm05 bash[22470]: cluster 2026-03-10T11:37:38.005772+0000 mgr.x (mgr.24733) 206 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:39.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:38 vm05 bash[22470]: audit 2026-03-10T11:37:38.389018+0000 mgr.x (mgr.24733) 207 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:39.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:38 vm05 bash[17453]: cluster 2026-03-10T11:37:38.005772+0000 mgr.x (mgr.24733) 206 : cluster [DBG] pgmap v142: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:39.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:38 vm05 bash[17453]: audit 2026-03-10T11:37:38.389018+0000 mgr.x (mgr.24733) 207 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:41 vm05 bash[22470]: cluster 2026-03-10T11:37:40.006163+0000 mgr.x (mgr.24733) 208 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:41 vm05 bash[17453]: cluster 2026-03-10T11:37:40.006163+0000 mgr.x (mgr.24733) 208 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:41.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:41 vm07 bash[17804]: cluster 2026-03-10T11:37:40.006163+0000 mgr.x (mgr.24733) 208 : cluster [DBG] pgmap v143: 161 pgs: 161 active+clean; 457 KiB data, 
95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:43.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:43 vm05 bash[22470]: cluster 2026-03-10T11:37:42.006443+0000 mgr.x (mgr.24733) 209 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:43.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:43 vm05 bash[17453]: cluster 2026-03-10T11:37:42.006443+0000 mgr.x (mgr.24733) 209 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:43 vm07 bash[17804]: cluster 2026-03-10T11:37:42.006443+0000 mgr.x (mgr.24733) 209 : cluster [DBG] pgmap v144: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:44.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:44 vm07 bash[38631]: ts=2026-03-10T11:37:44.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:37:44 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:37:44] "GET /metrics HTTP/1.1" 200 37531 "" "Prometheus/2.51.0" 2026-03-10T11:37:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:45 vm05 bash[22470]: cluster 2026-03-10T11:37:44.006750+0000 mgr.x (mgr.24733) 210 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:45 vm05 bash[22470]: audit 2026-03-10T11:37:44.154919+0000 mon.b (mon.2) 203 : audit 
[DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:45 vm05 bash[17453]: cluster 2026-03-10T11:37:44.006750+0000 mgr.x (mgr.24733) 210 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:45 vm05 bash[17453]: audit 2026-03-10T11:37:44.154919+0000 mon.b (mon.2) 203 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:45.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:45 vm07 bash[17804]: cluster 2026-03-10T11:37:44.006750+0000 mgr.x (mgr.24733) 210 : cluster [DBG] pgmap v145: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:45 vm07 bash[17804]: audit 2026-03-10T11:37:44.154919+0000 mon.b (mon.2) 203 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:37:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:47 vm07 bash[17804]: cluster 2026-03-10T11:37:46.007187+0000 mgr.x (mgr.24733) 211 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:47.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:46 vm07 bash[38631]: ts=2026-03-10T11:37:46.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:47.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:47 vm05 bash[22470]: cluster 2026-03-10T11:37:46.007187+0000 mgr.x (mgr.24733) 211 : cluster [DBG] pgmap v146: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:47.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:47 vm05 bash[17453]: cluster 2026-03-10T11:37:46.007187+0000 mgr.x (mgr.24733) 211 : cluster [DBG] pgmap v146: 
161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:48 vm07 bash[17804]: cluster 2026-03-10T11:37:48.007400+0000 mgr.x (mgr.24733) 212 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:48 vm07 bash[17804]: audit 2026-03-10T11:37:48.396608+0000 mgr.x (mgr.24733) 213 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:49.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:48 vm05 bash[22470]: cluster 2026-03-10T11:37:48.007400+0000 mgr.x (mgr.24733) 212 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:49.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:48 vm05 bash[22470]: audit 2026-03-10T11:37:48.396608+0000 mgr.x (mgr.24733) 213 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:49.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:48 vm05 bash[17453]: cluster 2026-03-10T11:37:48.007400+0000 mgr.x (mgr.24733) 212 : cluster [DBG] pgmap v147: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:49.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:48 vm05 bash[17453]: audit 2026-03-10T11:37:48.396608+0000 mgr.x (mgr.24733) 213 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:51.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:51 vm05 bash[22470]: cluster 2026-03-10T11:37:50.007779+0000 mgr.x (mgr.24733) 214 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:51.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:51 vm05 bash[17453]: cluster 2026-03-10T11:37:50.007779+0000 mgr.x (mgr.24733) 214 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:51 vm07 bash[17804]: cluster 2026-03-10T11:37:50.007779+0000 mgr.x (mgr.24733) 214 : cluster [DBG] pgmap v148: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:53.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:53 vm05 bash[22470]: cluster 2026-03-10T11:37:52.008108+0000 mgr.x (mgr.24733) 215 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:53.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:53 vm05 bash[17453]: cluster 2026-03-10T11:37:52.008108+0000 mgr.x (mgr.24733) 215 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:53 vm07 bash[17804]: cluster 2026-03-10T11:37:52.008108+0000 mgr.x (mgr.24733) 215 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
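The repeating CephOSDFlapping evaluation failures above are a PromQL join problem rather than an OSD problem: the log shows ceph_osd_metadata carrying two series for osd.0 (one scraped as instance="ceph_cluster" with a cluster label, one as instance="192.168.123.107:9283"), so the one-to-one match requested by on (ceph_daemon) finds a duplicate on its right-hand side. A minimal sketch of a workaround, assuming the duplicates really do come from the mgr metrics being scraped under two jobs and that collapsing the metadata to one series per OSD is acceptable (the cleaner fix would be removing the redundant scrape target from the Prometheus config) — this rewritten expression is hypothetical, not part of the shipped rule:

    # Hypothetical rewrite of the CephOSDFlapping expression: max by ()
    # collapses the duplicated ceph_osd_metadata series to one per
    # (ceph_daemon, hostname), so the group_left join again has a unique
    # right-hand side and the rule can evaluate.
    (
      rate(ceph_osd_up[5m])
        * on (ceph_daemon) group_left (hostname)
          max by (ceph_daemon, hostname) (ceph_osd_metadata)
    ) * 60 > 1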
2026-03-10T11:37:54.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:54 vm07 bash[38631]: ts=2026-03-10T11:37:54.146Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:54.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:37:54 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:37:54] "GET /metrics HTTP/1.1" 200 37531 "" "Prometheus/2.51.0" 2026-03-10T11:37:55.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:55 vm05 bash[22470]: cluster 2026-03-10T11:37:54.008379+0000 mgr.x (mgr.24733) 216 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:55.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:55 vm05 bash[17453]: cluster 2026-03-10T11:37:54.008379+0000 mgr.x (mgr.24733) 216 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:55.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:55 vm07 bash[17804]: cluster 2026-03-10T11:37:54.008379+0000 mgr.x (mgr.24733) 216 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:57 vm07 bash[17804]: cluster 2026-03-10T11:37:56.008781+0000 mgr.x (mgr.24733) 217 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:57.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:37:56 vm07 bash[38631]: ts=2026-03-10T11:37:56.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule 
manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:37:57.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:57 vm05 bash[22470]: cluster 2026-03-10T11:37:56.008781+0000 mgr.x (mgr.24733) 217 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:57.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:57 vm05 bash[17453]: cluster 2026-03-10T11:37:56.008781+0000 mgr.x (mgr.24733) 217 : cluster [DBG] pgmap v151: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:37:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:58 vm07 bash[17804]: cluster 2026-03-10T11:37:58.009110+0000 mgr.x (mgr.24733) 218 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:58 vm07 bash[17804]: audit 2026-03-10T11:37:58.398688+0000 mgr.x (mgr.24733) 219 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:59.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:58 vm05 bash[22470]: cluster 2026-03-10T11:37:58.009110+0000 mgr.x (mgr.24733) 218 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:59.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:58 vm05 bash[22470]: audit 2026-03-10T11:37:58.398688+0000 mgr.x (mgr.24733) 219 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:37:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:58 vm05 bash[17453]: cluster 2026-03-10T11:37:58.009110+0000 mgr.x (mgr.24733) 218 : cluster [DBG] pgmap v152: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:37:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:58 vm05 bash[17453]: audit 2026-03-10T11:37:58.398688+0000 mgr.x (mgr.24733) 219 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-10T11:38:00.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:37:59 vm07 bash[17804]: audit 2026-03-10T11:37:59.155097+0000 mon.b (mon.2) 204 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:00.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:37:59 vm05 bash[22470]: audit 2026-03-10T11:37:59.155097+0000 mon.b (mon.2) 204 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:00.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:37:59 vm05 bash[17453]: audit 2026-03-10T11:37:59.155097+0000 mon.b (mon.2) 204 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:01.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:00 vm07 bash[17804]: cluster 2026-03-10T11:38:00.009559+0000 mgr.x (mgr.24733) 220 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:00 vm05 bash[22470]: cluster 2026-03-10T11:38:00.009559+0000 mgr.x (mgr.24733) 220 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:00 vm05 bash[17453]: cluster 2026-03-10T11:38:00.009559+0000 mgr.x (mgr.24733) 220 : cluster [DBG] pgmap v153: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:03.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:03 vm05 bash[22470]: cluster 2026-03-10T11:38:02.009887+0000 mgr.x (mgr.24733) 221 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:03.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:03 vm05 bash[17453]: cluster 2026-03-10T11:38:02.009887+0000 mgr.x (mgr.24733) 221 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:03 vm07 bash[17804]: cluster 2026-03-10T11:38:02.009887+0000 mgr.x (mgr.24733) 221 : cluster [DBG] pgmap v154: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:04.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:38:04 vm07 bash[38631]: ts=2026-03-10T11:38:04.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. 
Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:38:04.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:38:04 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:38:04] "GET /metrics HTTP/1.1" 200 37533 "" "Prometheus/2.51.0" 2026-03-10T11:38:05.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:05 vm05 bash[22470]: cluster 2026-03-10T11:38:04.010188+0000 mgr.x (mgr.24733) 222 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:05.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:05 vm05 bash[17453]: cluster 2026-03-10T11:38:04.010188+0000 mgr.x (mgr.24733) 222 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:05.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:05 vm07 bash[17804]: cluster 2026-03-10T11:38:04.010188+0000 mgr.x (mgr.24733) 222 : cluster [DBG] pgmap v155: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:07.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:07 vm05 bash[22470]: cluster 2026-03-10T11:38:06.010691+0000 mgr.x (mgr.24733) 223 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:07.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:07 vm05 bash[17453]: cluster 2026-03-10T11:38:06.010691+0000 mgr.x (mgr.24733) 223 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:07.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:07 vm07 bash[17804]: cluster 2026-03-10T11:38:06.010691+0000 mgr.x (mgr.24733) 223 : cluster [DBG] pgmap v156: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:07.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:38:06 vm07 bash[38631]: ts=2026-03-10T11:38:06.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info 
< 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:38:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:08 vm07 bash[17804]: cluster 2026-03-10T11:38:08.011002+0000 mgr.x (mgr.24733) 224 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:08 vm07 bash[17804]: audit 2026-03-10T11:38:08.400618+0000 mgr.x (mgr.24733) 225 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:09.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:08 vm05 bash[22470]: cluster 2026-03-10T11:38:08.011002+0000 mgr.x (mgr.24733) 224 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:09.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:08 vm05 bash[22470]: audit 2026-03-10T11:38:08.400618+0000 mgr.x (mgr.24733) 225 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:09.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:08 vm05 bash[17453]: cluster 2026-03-10T11:38:08.011002+0000 mgr.x (mgr.24733) 224 : cluster [DBG] pgmap v157: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:09.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:08 vm05 bash[17453]: audit 2026-03-10T11:38:08.400618+0000 mgr.x (mgr.24733) 225 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:11.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:11 vm05 bash[22470]: cluster 2026-03-10T11:38:10.011405+0000 mgr.x (mgr.24733) 226 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:11.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:11 vm05 bash[17453]: cluster 2026-03-10T11:38:10.011405+0000 mgr.x (mgr.24733) 226 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:11 vm07 bash[17804]: cluster 2026-03-10T11:38:10.011405+0000 mgr.x (mgr.24733) 226 : cluster [DBG] pgmap v158: 161 pgs: 161 active+clean; 457 KiB data, 
95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:13.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:13 vm05 bash[22470]: cluster 2026-03-10T11:38:12.011738+0000 mgr.x (mgr.24733) 227 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:13.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:13 vm05 bash[17453]: cluster 2026-03-10T11:38:12.011738+0000 mgr.x (mgr.24733) 227 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:13 vm07 bash[17804]: cluster 2026-03-10T11:38:12.011738+0000 mgr.x (mgr.24733) 227 : cluster [DBG] pgmap v159: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:14.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:38:14 vm07 bash[38631]: ts=2026-03-10T11:38:14.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:38:14.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:38:14 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:38:14] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0" 2026-03-10T11:38:15.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:15 vm05 bash[22470]: cluster 2026-03-10T11:38:14.012031+0000 mgr.x (mgr.24733) 228 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:15.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:15 vm05 bash[22470]: audit 2026-03-10T11:38:14.155155+0000 mon.b (mon.2) 205 : audit 
[DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:15.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:15 vm05 bash[17453]: cluster 2026-03-10T11:38:14.012031+0000 mgr.x (mgr.24733) 228 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:15.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:15 vm05 bash[17453]: audit 2026-03-10T11:38:14.155155+0000 mon.b (mon.2) 205 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:15.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:15 vm07 bash[17804]: cluster 2026-03-10T11:38:14.012031+0000 mgr.x (mgr.24733) 228 : cluster [DBG] pgmap v160: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:15 vm07 bash[17804]: audit 2026-03-10T11:38:14.155155+0000 mon.b (mon.2) 205 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:17 vm05 bash[22470]: cluster 2026-03-10T11:38:16.012404+0000 mgr.x (mgr.24733) 229 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:17.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:17 vm05 bash[17453]: cluster 2026-03-10T11:38:16.012404+0000 mgr.x (mgr.24733) 229 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:17 vm07 bash[17804]: cluster 2026-03-10T11:38:16.012404+0000 mgr.x (mgr.24733) 229 : cluster [DBG] pgmap v161: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:17.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:38:16 vm07 bash[38631]: ts=2026-03-10T11:38:16.947Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 
15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:38:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:18 vm07 bash[17804]: cluster 2026-03-10T11:38:18.012702+0000 mgr.x (mgr.24733) 230 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:19.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:18 vm07 bash[17804]: audit 2026-03-10T11:38:18.403022+0000 mgr.x (mgr.24733) 231 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:19.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:18 vm05 bash[22470]: cluster 2026-03-10T11:38:18.012702+0000 mgr.x (mgr.24733) 230 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:19.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:18 vm05 bash[22470]: audit 2026-03-10T11:38:18.403022+0000 mgr.x (mgr.24733) 231 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:19.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:18 vm05 bash[17453]: cluster 2026-03-10T11:38:18.012702+0000 mgr.x (mgr.24733) 230 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:19.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:18 vm05 bash[17453]: audit 2026-03-10T11:38:18.403022+0000 mgr.x (mgr.24733) 231 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:21.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:21 vm05 bash[22470]: cluster 2026-03-10T11:38:20.013204+0000 mgr.x (mgr.24733) 232 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:21.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:21 vm05 bash[17453]: cluster 2026-03-10T11:38:20.013204+0000 mgr.x (mgr.24733) 232 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:21.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:21 vm07 bash[17804]: cluster 2026-03-10T11:38:20.013204+0000 mgr.x (mgr.24733) 232 : cluster [DBG] pgmap v163: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:23.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:23 vm05 bash[22470]: cluster 2026-03-10T11:38:22.013508+0000 mgr.x (mgr.24733) 233 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:23.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:23 vm05 bash[17453]: cluster 2026-03-10T11:38:22.013508+0000 mgr.x (mgr.24733) 233 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:23 vm07 bash[17804]: cluster 2026-03-10T11:38:22.013508+0000 mgr.x (mgr.24733) 233 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
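The recurring CephNodeDiskspaceWarning failures have the same shape: node_uname_info exists for instance="vm07" both with and without a cluster label, so on (instance) matches two right-hand series where exactly one is allowed. A sketch under the same assumption (a double-scraped exporter; deduplicate with max by, though aligning the scrape configs so only one label set is produced would be the real fix) — again a hypothetical rewrite, not the rule as deployed:

    # Hypothetical rewrite: keep one node_uname_info series per host
    # before joining, so the on (instance) match is unambiguous.
    predict_linear(node_filesystem_free_bytes{device=~"/.*"}[2d], 3600 * 24 * 5)
      * on (instance) group_left (nodename)
        max by (instance, nodename) (node_uname_info)
    < 0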
2026-03-10T11:38:24.408 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:38:24 vm07 bash[38631]: ts=2026-03-10T11:38:24.147Z caller=group.go:483 level=warn name=CephOSDFlapping index=13 component="rule manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=osd msg="Evaluating rule failed" rule="alert: CephOSDFlapping\nexpr: (rate(ceph_osd_up[5m]) * on (ceph_daemon) group_left (hostname) ceph_osd_metadata)\n * 60 > 1\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.4.4\n severity: warning\n type: ceph_default\nannotations:\n description: OSD {{ $labels.ceph_daemon }} on {{ $labels.hostname }} was marked\n down and back up {{ $value | humanize }} times once a minute for 5 minutes. This\n may indicate a network issue (latency, packet loss, MTU mismatch) on the cluster\n network, or the public network if no cluster network is deployed. Check the network\n stats on the listed host(s).\n documentation: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd#flapping-osds\n summary: Network issues are causing OSDs to flap (mark each other down)\n" err="found duplicate series for the match group {ceph_daemon=\"osd.0\"} on the right hand-side of the operation: [{__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"ceph_cluster\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}, {__name__=\"ceph_osd_metadata\", ceph_daemon=\"osd.0\", ceph_version=\"ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)\", cluster_addr=\"192.168.123.105\", device_class=\"hdd\", hostname=\"vm05\", instance=\"192.168.123.107:9283\", job=\"ceph\", objectstore=\"bluestore\", public_addr=\"192.168.123.105\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:38:24.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:38:24 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:38:24] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0" 2026-03-10T11:38:25.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:25 vm05 bash[22470]: cluster 2026-03-10T11:38:24.013820+0000 mgr.x (mgr.24733) 234 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:25.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:25 vm05 bash[17453]: cluster 2026-03-10T11:38:24.013820+0000 mgr.x (mgr.24733) 234 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:25 vm07 bash[17804]: cluster 2026-03-10T11:38:24.013820+0000 mgr.x (mgr.24733) 234 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:27.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:27 vm07 bash[17804]: cluster 2026-03-10T11:38:26.014321+0000 mgr.x (mgr.24733) 235 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:27.445 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:38:26 vm07 bash[38631]: ts=2026-03-10T11:38:26.948Z caller=group.go:483 level=warn name=CephNodeDiskspaceWarning index=4 component="rule 
manager" file=/etc/prometheus/alerting/ceph_alerts.yml group=nodes msg="Evaluating rule failed" rule="alert: CephNodeDiskspaceWarning\nexpr: predict_linear(node_filesystem_free_bytes{device=~\"/.*\"}[2d], 3600 * 24 * 5)\n * on (instance) group_left (nodename) node_uname_info < 0\nlabels:\n oid: 1.3.6.1.4.1.50495.1.2.1.8.4\n severity: warning\n type: ceph_default\nannotations:\n description: Mountpoint {{ $labels.mountpoint }} on {{ $labels.nodename }} will\n be full in less than 5 days based on the 48 hour trailing fill rate.\n summary: Host filesystem free space is getting low\n" err="found duplicate series for the match group {instance=\"vm07\"} on the right hand-side of the operation: [{__name__=\"node_uname_info\", cluster=\"72041074-1c73-11f1-8607-4fca9a5e0a4d\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}, {__name__=\"node_uname_info\", domainname=\"(none)\", instance=\"vm07\", job=\"node\", machine=\"x86_64\", nodename=\"vm07\", release=\"5.15.0-1092-kvm\", sysname=\"Linux\", version=\"#97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026\"}];many-to-many matching not allowed: matching labels must be unique on one side" 2026-03-10T11:38:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:27 vm05 bash[17453]: cluster 2026-03-10T11:38:26.014321+0000 mgr.x (mgr.24733) 235 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:27 vm05 bash[22470]: cluster 2026-03-10T11:38:26.014321+0000 mgr.x (mgr.24733) 235 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:28 vm07 bash[17804]: cluster 2026-03-10T11:38:28.014582+0000 mgr.x (mgr.24733) 236 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:28 vm07 bash[17804]: audit 2026-03-10T11:38:28.413598+0000 mgr.x (mgr.24733) 237 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:28 vm07 bash[17804]: audit 2026-03-10T11:38:28.533959+0000 mon.b (mon.2) 206 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:38:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:28 vm07 bash[17804]: audit 2026-03-10T11:38:28.844631+0000 mon.b (mon.2) 207 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:38:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:28 vm07 bash[17804]: audit 2026-03-10T11:38:28.845244+0000 mon.b (mon.2) 208 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:38:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:28 vm07 bash[17804]: audit 2026-03-10T11:38:28.864298+0000 mon.a (mon.0) 888 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:38:29.342 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:28 vm05 bash[17453]: cluster 2026-03-10T11:38:28.014582+0000 mgr.x (mgr.24733) 236 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:28 vm05 bash[17453]: audit 2026-03-10T11:38:28.413598+0000 mgr.x (mgr.24733) 237 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:28 vm05 bash[17453]: audit 2026-03-10T11:38:28.533959+0000 mon.b (mon.2) 206 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:28 vm05 bash[17453]: audit 2026-03-10T11:38:28.844631+0000 mon.b (mon.2) 207 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:28 vm05 bash[17453]: audit 2026-03-10T11:38:28.845244+0000 mon.b (mon.2) 208 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:28 vm05 bash[17453]: audit 2026-03-10T11:38:28.864298+0000 mon.a (mon.0) 888 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:28 vm05 bash[22470]: cluster 2026-03-10T11:38:28.014582+0000 mgr.x (mgr.24733) 236 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:28 vm05 bash[22470]: audit 2026-03-10T11:38:28.413598+0000 mgr.x (mgr.24733) 237 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:28 vm05 bash[22470]: audit 2026-03-10T11:38:28.533959+0000 mon.b (mon.2) 206 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:28 vm05 bash[22470]: audit 2026-03-10T11:38:28.844631+0000 mon.b (mon.2) 207 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:28 vm05 bash[22470]: audit 2026-03-10T11:38:28.845244+0000 mon.b (mon.2) 208 : audit [INF] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:38:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:28 vm05 bash[22470]: audit 2026-03-10T11:38:28.864298+0000 mon.a (mon.0) 888 : audit [INF] from='mgr.24733 ' entity='mgr.x' 2026-03-10T11:38:30.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:29 vm07 bash[17804]: audit 2026-03-10T11:38:29.155765+0000 mon.b (mon.2) 209 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-10T11:38:30.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:29 vm05 bash[17453]: audit 2026-03-10T11:38:29.155765+0000 mon.b (mon.2) 209 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:30.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:29 vm05 bash[22470]: audit 2026-03-10T11:38:29.155765+0000 mon.b (mon.2) 209 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:31.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:30 vm07 bash[17804]: cluster 2026-03-10T11:38:30.014973+0000 mgr.x (mgr.24733) 238 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:31.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:30 vm05 bash[17453]: cluster 2026-03-10T11:38:30.014973+0000 mgr.x (mgr.24733) 238 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:31.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:30 vm05 bash[22470]: cluster 2026-03-10T11:38:30.014973+0000 mgr.x (mgr.24733) 238 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:33 vm05 bash[22470]: cluster 2026-03-10T11:38:32.015315+0000 mgr.x (mgr.24733) 239 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:33.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:33 vm05 bash[17453]: cluster 2026-03-10T11:38:32.015315+0000 mgr.x (mgr.24733) 239 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:33.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:33 vm07 bash[17804]: cluster 2026-03-10T11:38:32.015315+0000 mgr.x (mgr.24733) 239 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:38:34 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:38:34] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0" 2026-03-10T11:38:35.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:35 vm05 bash[22470]: cluster 2026-03-10T11:38:34.015597+0000 mgr.x (mgr.24733) 240 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:35.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:35 vm05 bash[17453]: cluster 2026-03-10T11:38:34.015597+0000 mgr.x (mgr.24733) 240 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:35.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:35 vm07 bash[17804]: cluster 2026-03-10T11:38:34.015597+0000 mgr.x (mgr.24733) 240 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:37 vm05 bash[22470]: cluster 2026-03-10T11:38:36.016080+0000 mgr.x (mgr.24733) 241 : cluster [DBG] pgmap v171: 
161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:37.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:37 vm05 bash[17453]: cluster 2026-03-10T11:38:36.016080+0000 mgr.x (mgr.24733) 241 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:37 vm07 bash[17804]: cluster 2026-03-10T11:38:36.016080+0000 mgr.x (mgr.24733) 241 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:38 vm07 bash[17804]: cluster 2026-03-10T11:38:38.016408+0000 mgr.x (mgr.24733) 242 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:38 vm07 bash[17804]: audit 2026-03-10T11:38:38.424256+0000 mgr.x (mgr.24733) 243 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:39.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:38 vm05 bash[22470]: cluster 2026-03-10T11:38:38.016408+0000 mgr.x (mgr.24733) 242 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:39.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:38 vm05 bash[22470]: audit 2026-03-10T11:38:38.424256+0000 mgr.x (mgr.24733) 243 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:39.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:38 vm05 bash[17453]: cluster 2026-03-10T11:38:38.016408+0000 mgr.x (mgr.24733) 242 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:39.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:38 vm05 bash[17453]: audit 2026-03-10T11:38:38.424256+0000 mgr.x (mgr.24733) 243 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:41 vm05 bash[22470]: cluster 2026-03-10T11:38:40.016818+0000 mgr.x (mgr.24733) 244 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:41 vm05 bash[17453]: cluster 2026-03-10T11:38:40.016818+0000 mgr.x (mgr.24733) 244 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:41.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:41 vm07 bash[17804]: cluster 2026-03-10T11:38:40.016818+0000 mgr.x (mgr.24733) 244 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:43.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:43 vm05 bash[22470]: cluster 2026-03-10T11:38:42.017196+0000 mgr.x (mgr.24733) 245 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-10T11:38:43.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:43 vm05 bash[17453]: cluster 2026-03-10T11:38:42.017196+0000 mgr.x (mgr.24733) 245 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:43 vm07 bash[17804]: cluster 2026-03-10T11:38:42.017196+0000 mgr.x (mgr.24733) 245 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:38:44 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:38:44] "GET /metrics HTTP/1.1" 200 37533 "" "Prometheus/2.51.0" 2026-03-10T11:38:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:45 vm05 bash[22470]: cluster 2026-03-10T11:38:44.017516+0000 mgr.x (mgr.24733) 246 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:45 vm05 bash[22470]: audit 2026-03-10T11:38:44.155719+0000 mon.b (mon.2) 210 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:45 vm05 bash[17453]: cluster 2026-03-10T11:38:44.017516+0000 mgr.x (mgr.24733) 246 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:45 vm05 bash[17453]: audit 2026-03-10T11:38:44.155719+0000 mon.b (mon.2) 210 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:45 vm07 bash[17804]: cluster 2026-03-10T11:38:44.017516+0000 mgr.x (mgr.24733) 246 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:45 vm07 bash[17804]: audit 2026-03-10T11:38:44.155719+0000 mon.b (mon.2) 210 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:38:47.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:47 vm05 bash[22470]: cluster 2026-03-10T11:38:46.018094+0000 mgr.x (mgr.24733) 247 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:47.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:47 vm05 bash[17453]: cluster 2026-03-10T11:38:46.018094+0000 mgr.x (mgr.24733) 247 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:47 vm07 bash[17804]: cluster 2026-03-10T11:38:46.018094+0000 mgr.x (mgr.24733) 247 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:48 vm07 bash[17804]: cluster 2026-03-10T11:38:48.018397+0000 mgr.x (mgr.24733) 248 : cluster [DBG] pgmap 
v177: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:48 vm07 bash[17804]: audit 2026-03-10T11:38:48.435021+0000 mgr.x (mgr.24733) 249 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:49.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:48 vm05 bash[22470]: cluster 2026-03-10T11:38:48.018397+0000 mgr.x (mgr.24733) 248 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:49.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:48 vm05 bash[22470]: audit 2026-03-10T11:38:48.435021+0000 mgr.x (mgr.24733) 249 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:49.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:48 vm05 bash[17453]: cluster 2026-03-10T11:38:48.018397+0000 mgr.x (mgr.24733) 248 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:49.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:48 vm05 bash[17453]: audit 2026-03-10T11:38:48.435021+0000 mgr.x (mgr.24733) 249 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:51.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:51 vm05 bash[22470]: cluster 2026-03-10T11:38:50.018839+0000 mgr.x (mgr.24733) 250 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:51.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:51 vm05 bash[17453]: cluster 2026-03-10T11:38:50.018839+0000 mgr.x (mgr.24733) 250 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:51 vm07 bash[17804]: cluster 2026-03-10T11:38:50.018839+0000 mgr.x (mgr.24733) 250 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:53.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:53 vm05 bash[17453]: cluster 2026-03-10T11:38:52.019136+0000 mgr.x (mgr.24733) 251 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:53.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:53 vm05 bash[22470]: cluster 2026-03-10T11:38:52.019136+0000 mgr.x (mgr.24733) 251 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:53 vm07 bash[17804]: cluster 2026-03-10T11:38:52.019136+0000 mgr.x (mgr.24733) 251 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:54.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:38:54 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:38:54] "GET /metrics HTTP/1.1" 200 37533 "" "Prometheus/2.51.0" 2026-03-10T11:38:55.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:55 
vm07 bash[17804]: cluster 2026-03-10T11:38:54.019397+0000 mgr.x (mgr.24733) 252 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:55.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:55 vm05 bash[22470]: cluster 2026-03-10T11:38:54.019397+0000 mgr.x (mgr.24733) 252 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:55 vm05 bash[17453]: cluster 2026-03-10T11:38:54.019397+0000 mgr.x (mgr.24733) 252 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:57 vm07 bash[17804]: cluster 2026-03-10T11:38:56.019965+0000 mgr.x (mgr.24733) 253 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:57.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:57 vm05 bash[22470]: cluster 2026-03-10T11:38:56.019965+0000 mgr.x (mgr.24733) 253 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:57.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:57 vm05 bash[17453]: cluster 2026-03-10T11:38:56.019965+0000 mgr.x (mgr.24733) 253 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:38:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:58 vm07 bash[17804]: cluster 2026-03-10T11:38:58.020290+0000 mgr.x (mgr.24733) 254 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:59.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:58 vm07 bash[17804]: audit 2026-03-10T11:38:58.443588+0000 mgr.x (mgr.24733) 255 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:59.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:58 vm05 bash[22470]: cluster 2026-03-10T11:38:58.020290+0000 mgr.x (mgr.24733) 254 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:59.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:58 vm05 bash[22470]: audit 2026-03-10T11:38:58.443588+0000 mgr.x (mgr.24733) 255 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:38:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:58 vm05 bash[17453]: cluster 2026-03-10T11:38:58.020290+0000 mgr.x (mgr.24733) 254 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:38:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:58 vm05 bash[17453]: audit 2026-03-10T11:38:58.443588+0000 mgr.x (mgr.24733) 255 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:00.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:38:59 vm07 bash[17804]: audit 2026-03-10T11:38:59.156001+0000 mon.b (mon.2) 211 : audit [DBG] 
from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:00.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:38:59 vm05 bash[22470]: audit 2026-03-10T11:38:59.156001+0000 mon.b (mon.2) 211 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:00.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:38:59 vm05 bash[17453]: audit 2026-03-10T11:38:59.156001+0000 mon.b (mon.2) 211 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:01.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:00 vm07 bash[17804]: cluster 2026-03-10T11:39:00.020749+0000 mgr.x (mgr.24733) 256 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:00 vm05 bash[22470]: cluster 2026-03-10T11:39:00.020749+0000 mgr.x (mgr.24733) 256 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:00 vm05 bash[17453]: cluster 2026-03-10T11:39:00.020749+0000 mgr.x (mgr.24733) 256 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:03.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:03 vm05 bash[22470]: cluster 2026-03-10T11:39:02.021092+0000 mgr.x (mgr.24733) 257 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:03.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:03 vm05 bash[17453]: cluster 2026-03-10T11:39:02.021092+0000 mgr.x (mgr.24733) 257 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:03 vm07 bash[17804]: cluster 2026-03-10T11:39:02.021092+0000 mgr.x (mgr.24733) 257 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:04.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:04 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:39:04] "GET /metrics HTTP/1.1" 200 37531 "" "Prometheus/2.51.0" 2026-03-10T11:39:05.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:05 vm05 bash[22470]: cluster 2026-03-10T11:39:04.021438+0000 mgr.x (mgr.24733) 258 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:05.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:05 vm05 bash[17453]: cluster 2026-03-10T11:39:04.021438+0000 mgr.x (mgr.24733) 258 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:05.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:05 vm07 bash[17804]: cluster 2026-03-10T11:39:04.021438+0000 mgr.x (mgr.24733) 258 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:07.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
11:39:07 vm05 bash[17453]: cluster 2026-03-10T11:39:06.021933+0000 mgr.x (mgr.24733) 259 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:07.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:07 vm05 bash[22470]: cluster 2026-03-10T11:39:06.021933+0000 mgr.x (mgr.24733) 259 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:07.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:07 vm07 bash[17804]: cluster 2026-03-10T11:39:06.021933+0000 mgr.x (mgr.24733) 259 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:08 vm07 bash[17804]: cluster 2026-03-10T11:39:08.022248+0000 mgr.x (mgr.24733) 260 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:09.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:08 vm07 bash[17804]: audit 2026-03-10T11:39:08.454257+0000 mgr.x (mgr.24733) 261 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:09.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:08 vm05 bash[17453]: cluster 2026-03-10T11:39:08.022248+0000 mgr.x (mgr.24733) 260 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:09.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:08 vm05 bash[17453]: audit 2026-03-10T11:39:08.454257+0000 mgr.x (mgr.24733) 261 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:09.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:08 vm05 bash[22470]: cluster 2026-03-10T11:39:08.022248+0000 mgr.x (mgr.24733) 260 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:09.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:08 vm05 bash[22470]: audit 2026-03-10T11:39:08.454257+0000 mgr.x (mgr.24733) 261 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:11.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:11 vm05 bash[22470]: cluster 2026-03-10T11:39:10.022649+0000 mgr.x (mgr.24733) 262 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:11.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:11 vm05 bash[17453]: cluster 2026-03-10T11:39:10.022649+0000 mgr.x (mgr.24733) 262 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:11.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:11 vm07 bash[17804]: cluster 2026-03-10T11:39:10.022649+0000 mgr.x (mgr.24733) 262 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:13.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:13 vm05 bash[22470]: cluster 2026-03-10T11:39:12.022959+0000 mgr.x (mgr.24733) 263 : 
cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:39:13.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:13 vm05 bash[17453]: cluster 2026-03-10T11:39:12.022959+0000 mgr.x (mgr.24733) 263 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:39:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:13 vm07 bash[17804]: cluster 2026-03-10T11:39:12.022959+0000 mgr.x (mgr.24733) 263 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:39:14.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:14 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:39:14] "GET /metrics HTTP/1.1" 200 37532 "" "Prometheus/2.51.0"
2026-03-10T11:39:15.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:15 vm05 bash[17453]: cluster 2026-03-10T11:39:14.023227+0000 mgr.x (mgr.24733) 264 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:39:15.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:15 vm05 bash[17453]: audit 2026-03-10T11:39:14.156129+0000 mon.b (mon.2) 212 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:39:15.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:15 vm05 bash[22470]: cluster 2026-03-10T11:39:14.023227+0000 mgr.x (mgr.24733) 264 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:39:15.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:15 vm05 bash[22470]: audit 2026-03-10T11:39:14.156129+0000 mon.b (mon.2) 212 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:39:15.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:15 vm07 bash[17804]: cluster 2026-03-10T11:39:14.023227+0000 mgr.x (mgr.24733) 264 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:39:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:15 vm07 bash[17804]: audit 2026-03-10T11:39:14.156129+0000 mon.b (mon.2) 212 : audit [DBG] from='mgr.24733 192.168.123.107:0/917553345' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:39:15.926 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:39:16.340 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:39:16.340 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (5m) 2m ago 12m 13.8M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:39:16.340 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (5m) 2m ago 12m 39.1M - dad864ee21e9 ea7bd1695c30
2026-03-10T11:39:16.340 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (5m) 2m ago 12m 42.2M - 3.5 e1d6a67b021e 71be9fb90a88
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283 running (7m) 2m ago 15m 530M - 19.2.3-678-ge911bdeb 654f31e6858e 29cf7638c524
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (2m) 2m ago 16m 355M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (16m) 2m ago 16m 55.0M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (15m) 2m ago 15m 42.5M 2048M 17.2.0 e1d6a67b021e 824de3717020
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (15m) 2m ago 15m 37.9M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (5m) 2m ago 12m 7564k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (5m) 2m ago 12m 7583k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (15m) 2m ago 15m 50.4M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (14m) 2m ago 14m 53.4M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (14m) 2m ago 14m 49.5M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (14m) 2m ago 14m 50.8M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (14m) 2m ago 14m 50.6M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (13m) 2m ago 13m 48.4M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (13m) 2m ago 13m 47.0M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (13m) 2m ago 13m 49.2M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (5m) 2m ago 12m 38.5M - 2.51.0 1d3b7f56885b 42d6386fa908
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (12m) 2m ago 12m 83.9M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:39:16.341 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (12m) 2m ago 12m 84.4M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
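Note: the ceph orch ps listing above captures the staggered-upgrade state this test drives: both mgr daemons already report 19.2.3-678-ge911bdeb while every mon, osd and rgw daemon still reports 17.2.0 (the monitoring daemons report their own component versions). A minimal sketch of checking that mechanically, not part of the recorded run, assuming the daemon_type and version field names that cephadm emits with --format json:

    ceph orch ps --format json \
        | jq -r '.[] | "\(.daemon_type) \(.version)"' | sort | uniq -c
    # expected at this stage: 2x "mgr 19.2.3-678-ge911bdeb", and 17.2.0
    # for every mon, osd and rgw daemon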
2026-03-10T11:39:16.386 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-10T11:39:16.816 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:39:16.816 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:39:16.816 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T11:39:16.816 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:39:16.816 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:39:16.816 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:39:16.816 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {},
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13,
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:39:16.817 INFO:teuthology.orchestra.run.vm05.stdout:}
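Note: ceph versions already emits the JSON shown above, so the staggered invariant at this point, only the two mgrs on the squid build and all 13 remaining Ceph daemons still on quincy, can be asserted directly. A sketch, not part of the recorded run, with this run's version substrings hard-coded:

    # jq -e sets a non-zero exit status if the condition is false
    ceph versions | jq -e '
        (.mgr | keys | all(contains("19.2.3"))) and
        (.mon + .osd + .rgw | keys | all(contains("17.2.0")))'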
2026-03-10T11:39:16.886 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-10T11:39:17.144 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:17 vm05 bash[22470]: cluster 2026-03-10T11:39:16.023721+0000 mgr.x (mgr.24733) 265 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:39:17.144 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:17 vm05 bash[22470]: audit 2026-03-10T11:39:16.338181+0000 mgr.x (mgr.24733) 266 : audit [DBG] from='client.24865 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:39:17.144 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:17 vm05 bash[22470]: audit 2026-03-10T11:39:16.818859+0000 mon.c (mon.1) 52 : audit [DBG] from='client.? 192.168.123.105:0/2108197014' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:39:17.145 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:17 vm05 bash[17453]: cluster 2026-03-10T11:39:16.023721+0000 mgr.x (mgr.24733) 265 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:39:17.145 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:17 vm05 bash[17453]: audit 2026-03-10T11:39:16.338181+0000 mgr.x (mgr.24733) 266 : audit [DBG] from='client.24865 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:39:17.145 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:17 vm05 bash[17453]: audit 2026-03-10T11:39:16.818859+0000 mon.c (mon.1) 52 : audit [DBG] from='client.? 192.168.123.105:0/2108197014' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:39:17.374 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
2026-03-10T11:39:17.442 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph -s'
2026-03-10T11:39:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:17 vm07 bash[17804]: cluster 2026-03-10T11:39:16.023721+0000 mgr.x (mgr.24733) 265 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:39:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:17 vm07 bash[17804]: audit 2026-03-10T11:39:16.338181+0000 mgr.x (mgr.24733) 266 : audit [DBG] from='client.24865 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:39:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:17 vm07 bash[17804]: audit 2026-03-10T11:39:16.818859+0000 mon.c (mon.1) 52 : audit [DBG] from='client.? 192.168.123.105:0/2108197014' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout: cluster:
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout: id: 72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout: health: HEALTH_OK
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout: services:
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout: mon: 3 daemons, quorum a,c,b (age 15m)
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout: mgr: x(active, since 6m), standbys: y
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout: osd: 8 osds: 8 up (since 13m), 8 in (since 13m)
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout: rgw: 2 daemons active (2 hosts, 1 zones)
2026-03-10T11:39:17.900 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:39:17.901 INFO:teuthology.orchestra.run.vm05.stdout: data:
2026-03-10T11:39:17.901 INFO:teuthology.orchestra.run.vm05.stdout: pools: 6 pools, 161 pgs
2026-03-10T11:39:17.901 INFO:teuthology.orchestra.run.vm05.stdout: objects: 209 objects, 457 KiB
2026-03-10T11:39:17.901 INFO:teuthology.orchestra.run.vm05.stdout: usage: 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:17.901 INFO:teuthology.orchestra.run.vm05.stdout: pgs: 161 active+clean
2026-03-10T11:39:17.901 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:39:17.901 INFO:teuthology.orchestra.run.vm05.stdout: io:
2026-03-10T11:39:17.901 INFO:teuthology.orchestra.run.vm05.stdout: client: 1.2 KiB/s rd, 1 op/s rd, 0 op/s wr
2026-03-10T11:39:17.901 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:39:17.947 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph mgr fail'
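Note: ceph mgr fail above forces a mgr failover: the records that follow show the standby mgr.y taking over as active (mgrmap e28) while mgr.x restarts as a standby, after which the test simply sleeps. A sketch of waiting for that handoff to complete, not part of the recorded run, with this run's standby name y assumed:

    # poll the mgr map until the expected daemon reports as active
    until [ "$(ceph mgr dump | jq -r '.active_name')" = "y" ]; do
        sleep 2
    done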
2026-03-10T11:39:18.169 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:18 vm05 bash[22470]: audit 2026-03-10T11:39:17.376809+0000 mon.a (mon.0) 889 : audit [DBG] from='client.? 192.168.123.105:0/434360478' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:39:18.169 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:18 vm05 bash[22470]: audit 2026-03-10T11:39:17.902962+0000 mon.a (mon.0) 890 : audit [DBG] from='client.? 192.168.123.105:0/1525746279' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:39:18.169 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:18 vm05 bash[17453]: audit 2026-03-10T11:39:17.376809+0000 mon.a (mon.0) 889 : audit [DBG] from='client.? 192.168.123.105:0/434360478' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:39:18.169 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:18 vm05 bash[17453]: audit 2026-03-10T11:39:17.902962+0000 mon.a (mon.0) 890 : audit [DBG] from='client.? 192.168.123.105:0/1525746279' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:39:18.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:18 vm07 bash[17804]: audit 2026-03-10T11:39:17.376809+0000 mon.a (mon.0) 889 : audit [DBG] from='client.? 192.168.123.105:0/434360478' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:39:18.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:18 vm07 bash[17804]: audit 2026-03-10T11:39:17.902962+0000 mon.a (mon.0) 890 : audit [DBG] from='client.? 192.168.123.105:0/1525746279' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:39:19.179 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180'
2026-03-10T11:39:19.363 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:19 vm07 bash[36672]: ignoring --setuser ceph since I am not root
2026-03-10T11:39:19.363 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:19 vm07 bash[36672]: ignoring --setgroup ceph since I am not root
2026-03-10T11:39:19.363 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:19 vm07 bash[36672]: debug 2026-03-10T11:39:19.162+0000 7f7a2710c640 1 -- 192.168.123.107:0/2158235623 <== mon.2 v2:192.168.123.107:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x556fce7144e0 con 0x556fce6f2800
2026-03-10T11:39:19.363 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:19 vm07 bash[36672]: debug 2026-03-10T11:39:19.162+0000 7f7a2710c640 1 -- 192.168.123.107:0/2158235623 <== mon.2 v2:192.168.123.107:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x556fce6f14a0 con 0x556fce6f2800
2026-03-10T11:39:19.363 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:19 vm07 bash[36672]: debug 2026-03-10T11:39:19.218+0000 7f7a29969140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T11:39:19.363 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:19 vm07 bash[36672]: debug 2026-03-10T11:39:19.250+0000 7f7a29969140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T11:39:19.363 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:19 vm07 bash[17804]: cluster 2026-03-10T11:39:18.024061+0000 mgr.x (mgr.24733) 267 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:39:19.363 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:19 vm07 bash[17804]: audit 2026-03-10T11:39:18.376451+0000 mon.a
(mon.0) 891 : audit [INF] from='client.? 192.168.123.105:0/2286979336' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch 2026-03-10T11:39:19.363 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:19 vm07 bash[17804]: cluster 2026-03-10T11:39:18.384866+0000 mon.a (mon.0) 892 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T11:39:19.363 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:19 vm07 bash[17804]: audit 2026-03-10T11:39:18.460461+0000 mgr.x (mgr.24733) 268 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:19.363 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:19 vm07 bash[17804]: cluster 2026-03-10T11:39:18.913250+0000 mon.a (mon.0) 893 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:19 vm05 bash[22470]: cluster 2026-03-10T11:39:18.024061+0000 mgr.x (mgr.24733) 267 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:19 vm05 bash[22470]: audit 2026-03-10T11:39:18.376451+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 192.168.123.105:0/2286979336' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:19 vm05 bash[22470]: cluster 2026-03-10T11:39:18.384866+0000 mon.a (mon.0) 892 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:19 vm05 bash[22470]: audit 2026-03-10T11:39:18.460461+0000 mgr.x (mgr.24733) 268 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:19 vm05 bash[22470]: cluster 2026-03-10T11:39:18.913250+0000 mon.a (mon.0) 893 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:19 vm05 bash[17453]: cluster 2026-03-10T11:39:18.024061+0000 mgr.x (mgr.24733) 267 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:19 vm05 bash[17453]: audit 2026-03-10T11:39:18.376451+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 
192.168.123.105:0/2286979336' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:19 vm05 bash[17453]: cluster 2026-03-10T11:39:18.384866+0000 mon.a (mon.0) 892 : cluster [DBG] osdmap e90: 8 total, 8 up, 8 in 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:19 vm05 bash[17453]: audit 2026-03-10T11:39:18.460461+0000 mgr.x (mgr.24733) 268 : audit [DBG] from='client.14901 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:19 vm05 bash[17453]: cluster 2026-03-10T11:39:18.913250+0000 mon.a (mon.0) 893 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:39:19.398 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:39:19 vm05 bash[53899]: [10/Mar/2026:11:39:19] ENGINE Bus STOPPING 2026-03-10T11:39:19.629 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:19 vm07 bash[36672]: debug 2026-03-10T11:39:19.358+0000 7f7a29969140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T11:39:19.676 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:39:19 vm05 bash[53899]: [10/Mar/2026:11:39:19] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T11:39:19.676 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:39:19 vm05 bash[53899]: [10/Mar/2026:11:39:19] ENGINE Bus STOPPED 2026-03-10T11:39:19.676 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:39:19 vm05 bash[53899]: [10/Mar/2026:11:39:19] ENGINE Bus STARTING 2026-03-10T11:39:19.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:19 vm07 bash[36672]: debug 2026-03-10T11:39:19.626+0000 7f7a29969140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:39:20.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:39:19 vm05 bash[53899]: [10/Mar/2026:11:39:19] ENGINE Serving on http://:::9283 2026-03-10T11:39:20.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:39:19 vm05 bash[53899]: [10/Mar/2026:11:39:19] ENGINE Bus STARTED 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.066+0000 7f7a29969140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.158+0000 7f7a29969140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T11:39:20.415 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: from numpy import show_config as show_numpy_config 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.282+0000 7f7a29969140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.115595+0000 mon.a (mon.0) 894 : audit [INF] from='client.? 192.168.123.105:0/2286979336' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: cluster 2026-03-10T11:39:19.115752+0000 mon.a (mon.0) 895 : cluster [DBG] mgrmap e28: y(active, starting, since 0.737564s), standbys: x 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137200+0000 mon.a (mon.0) 896 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137293+0000 mon.a (mon.0) 897 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137344+0000 mon.a (mon.0) 898 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137399+0000 mon.a (mon.0) 899 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137474+0000 mon.a (mon.0) 900 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137530+0000 mon.a (mon.0) 901 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137620+0000 mon.a (mon.0) 902 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137691+0000 mon.a (mon.0) 903 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137768+0000 mon.a (mon.0) 904 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 
2026-03-10T11:39:19.137836+0000 mon.a (mon.0) 905 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137918+0000 mon.a (mon.0) 906 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.137987+0000 mon.a (mon.0) 907 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.138057+0000 mon.a (mon.0) 908 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.138134+0000 mon.a (mon.0) 909 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.138186+0000 mon.a (mon.0) 910 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.138386+0000 mon.a (mon.0) 911 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: cluster 2026-03-10T11:39:19.521443+0000 mon.a (mon.0) 912 : cluster [INF] Manager daemon y is now available 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.556023+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.556373+0000 mon.a (mon.0) 914 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.564650+0000 mon.a (mon.0) 915 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:20.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:20 vm07 bash[17804]: audit 2026-03-10T11:39:19.604135+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:39:20.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.115595+0000 mon.a (mon.0) 894 : audit [INF] from='client.? 
192.168.123.105:0/2286979336' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-10T11:39:20.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: cluster 2026-03-10T11:39:19.115752+0000 mon.a (mon.0) 895 : cluster [DBG] mgrmap e28: y(active, starting, since 0.737564s), standbys: x 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137200+0000 mon.a (mon.0) 896 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137293+0000 mon.a (mon.0) 897 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137344+0000 mon.a (mon.0) 898 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137399+0000 mon.a (mon.0) 899 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137474+0000 mon.a (mon.0) 900 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137530+0000 mon.a (mon.0) 901 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137620+0000 mon.a (mon.0) 902 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137691+0000 mon.a (mon.0) 903 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137768+0000 mon.a (mon.0) 904 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137836+0000 mon.a (mon.0) 905 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.137918+0000 mon.a (mon.0) 906 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 
2026-03-10T11:39:19.137987+0000 mon.a (mon.0) 907 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.138057+0000 mon.a (mon.0) 908 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.138134+0000 mon.a (mon.0) 909 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.138186+0000 mon.a (mon.0) 910 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.138386+0000 mon.a (mon.0) 911 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: cluster 2026-03-10T11:39:19.521443+0000 mon.a (mon.0) 912 : cluster [INF] Manager daemon y is now available 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.556023+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.556373+0000 mon.a (mon.0) 914 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.564650+0000 mon.a (mon.0) 915 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:20 vm05 bash[22470]: audit 2026-03-10T11:39:19.604135+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.115595+0000 mon.a (mon.0) 894 : audit [INF] from='client.? 
192.168.123.105:0/2286979336' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: cluster 2026-03-10T11:39:19.115752+0000 mon.a (mon.0) 895 : cluster [DBG] mgrmap e28: y(active, starting, since 0.737564s), standbys: x 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137200+0000 mon.a (mon.0) 896 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137293+0000 mon.a (mon.0) 897 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137344+0000 mon.a (mon.0) 898 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137399+0000 mon.a (mon.0) 899 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137474+0000 mon.a (mon.0) 900 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137530+0000 mon.a (mon.0) 901 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137620+0000 mon.a (mon.0) 902 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137691+0000 mon.a (mon.0) 903 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137768+0000 mon.a (mon.0) 904 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137836+0000 mon.a (mon.0) 905 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.137918+0000 mon.a (mon.0) 906 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 
2026-03-10T11:39:19.137987+0000 mon.a (mon.0) 907 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.138057+0000 mon.a (mon.0) 908 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.138134+0000 mon.a (mon.0) 909 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.138186+0000 mon.a (mon.0) 910 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.138386+0000 mon.a (mon.0) 911 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: cluster 2026-03-10T11:39:19.521443+0000 mon.a (mon.0) 912 : cluster [INF] Manager daemon y is now available 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.556023+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.556373+0000 mon.a (mon.0) 914 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.564650+0000 mon.a (mon.0) 915 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:20.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:20 vm05 bash[17453]: audit 2026-03-10T11:39:19.604135+0000 mon.a (mon.0) 916 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:39:20.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.410+0000 7f7a29969140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:39:20.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.446+0000 7f7a29969140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:39:20.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.482+0000 7f7a29969140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:39:20.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.522+0000 7f7a29969140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:39:20.695 
INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.566+0000 7f7a29969140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.950+0000 7f7a29969140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:20 vm07 bash[36672]: debug 2026-03-10T11:39:20.982+0000 7f7a29969140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.018+0000 7f7a29969140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.150+0000 7f7a29969140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.186+0000 7f7a29969140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:21 vm07 bash[17804]: cluster 2026-03-10T11:39:20.138988+0000 mon.a (mon.0) 917 : cluster [DBG] mgrmap e29: y(active, since 1.76081s), standbys: x
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:21 vm07 bash[17804]: cephadm 2026-03-10T11:39:20.683070+0000 mgr.y (mgr.24859) 2 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Bus STARTING
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:21 vm07 bash[17804]: cephadm 2026-03-10T11:39:20.784388+0000 mgr.y (mgr.24859) 3 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Serving on http://192.168.123.105:8765
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:21 vm07 bash[17804]: cephadm 2026-03-10T11:39:20.894199+0000 mgr.y (mgr.24859) 4 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Serving on https://192.168.123.105:7150
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:21 vm07 bash[17804]: cephadm 2026-03-10T11:39:20.894234+0000 mgr.y (mgr.24859) 5 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Bus STARTED
2026-03-10T11:39:21.225 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:21 vm07 bash[17804]: cephadm 2026-03-10T11:39:20.894562+0000 mgr.y (mgr.24859) 6 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Client ('192.168.123.105', 59638) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:39:21.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:21 vm05 bash[22470]: cluster 2026-03-10T11:39:20.138988+0000 mon.a (mon.0) 917 : cluster [DBG] mgrmap e29: y(active, since 1.76081s), standbys: x
2026-03-10T11:39:21.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:21 vm05 bash[22470]: cephadm 2026-03-10T11:39:20.683070+0000 mgr.y (mgr.24859) 2 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Bus STARTING
2026-03-10T11:39:21.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:21 vm05 bash[22470]: cephadm 2026-03-10T11:39:20.784388+0000 mgr.y (mgr.24859) 3 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Serving on http://192.168.123.105:8765
2026-03-10T11:39:21.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:21 vm05 bash[22470]: cephadm 2026-03-10T11:39:20.894199+0000 mgr.y (mgr.24859) 4 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Serving on https://192.168.123.105:7150
2026-03-10T11:39:21.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:21 vm05 bash[22470]: cephadm 2026-03-10T11:39:20.894234+0000 mgr.y (mgr.24859) 5 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Bus STARTED
2026-03-10T11:39:21.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:21 vm05 bash[22470]: cephadm 2026-03-10T11:39:20.894562+0000 mgr.y (mgr.24859) 6 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Client ('192.168.123.105', 59638) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:39:21.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:21 vm05 bash[17453]: cluster 2026-03-10T11:39:20.138988+0000 mon.a (mon.0) 917 : cluster [DBG] mgrmap e29: y(active, since 1.76081s), standbys: x
2026-03-10T11:39:21.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:21 vm05 bash[17453]: cephadm 2026-03-10T11:39:20.683070+0000 mgr.y (mgr.24859) 2 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Bus STARTING
2026-03-10T11:39:21.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:21 vm05 bash[17453]: cephadm 2026-03-10T11:39:20.784388+0000 mgr.y (mgr.24859) 3 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Serving on http://192.168.123.105:8765
2026-03-10T11:39:21.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:21 vm05 bash[17453]: cephadm 2026-03-10T11:39:20.894199+0000 mgr.y (mgr.24859) 4 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Serving on https://192.168.123.105:7150
2026-03-10T11:39:21.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:21 vm05 bash[17453]: cephadm 2026-03-10T11:39:20.894234+0000 mgr.y (mgr.24859) 5 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Bus STARTED
2026-03-10T11:39:21.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:21 vm05 bash[17453]: cephadm 2026-03-10T11:39:20.894562+0000 mgr.y (mgr.24859) 6 : cephadm [INF] [10/Mar/2026:11:39:20] ENGINE Client ('192.168.123.105', 59638) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:39:21.626 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.222+0000 7f7a29969140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T11:39:21.626 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.326+0000 7f7a29969140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:39:21.626 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.462+0000 7f7a29969140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T11:39:21.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.622+0000 7f7a29969140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T11:39:21.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.654+0000 7f7a29969140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T11:39:21.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.690+0000 7f7a29969140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T11:39:21.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:21 vm07 bash[36672]: debug 2026-03-10T11:39:21.826+0000 7f7a29969140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:22 vm07 bash[36672]: debug 2026-03-10T11:39:22.030+0000 7f7a29969140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:22 vm07 bash[36672]: [10/Mar/2026:11:39:22] ENGINE Bus STARTING
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:22 vm07 bash[36672]: CherryPy Checker:
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:22 vm07 bash[36672]: The Application mounted at '' has an empty config.
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:22 vm07 bash[36672]: [10/Mar/2026:11:39:22] ENGINE Serving on http://:::9283
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:22 vm07 bash[36672]: [10/Mar/2026:11:39:22] ENGINE Bus STARTED
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:22 vm07 bash[17804]: cluster 2026-03-10T11:39:21.129233+0000 mgr.y (mgr.24859) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:22 vm07 bash[17804]: cluster 2026-03-10T11:39:22.040026+0000 mon.a (mon.0) 918 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:22 vm07 bash[17804]: cluster 2026-03-10T11:39:22.040161+0000 mon.a (mon.0) 919 : cluster [DBG] Standby manager daemon x started
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:22 vm07 bash[17804]: audit 2026-03-10T11:39:22.041238+0000 mon.b (mon.2) 213 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:22 vm07 bash[17804]: audit 2026-03-10T11:39:22.045557+0000 mon.b (mon.2) 214 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:22 vm07 bash[17804]: audit 2026-03-10T11:39:22.046375+0000 mon.b (mon.2) 215 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T11:39:22.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:22 vm07 bash[17804]: audit 2026-03-10T11:39:22.046831+0000 mon.b (mon.2) 216 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:22 vm05 bash[22470]: cluster 2026-03-10T11:39:21.129233+0000 mgr.y (mgr.24859) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:22 vm05 bash[22470]: cluster 2026-03-10T11:39:22.040026+0000 mon.a (mon.0) 918 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:22 vm05 bash[22470]: cluster 2026-03-10T11:39:22.040161+0000 mon.a (mon.0) 919 : cluster [DBG] Standby manager daemon x started
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:22 vm05 bash[22470]: audit 2026-03-10T11:39:22.041238+0000 mon.b (mon.2) 213 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:22 vm05 bash[22470]: audit 2026-03-10T11:39:22.045557+0000 mon.b (mon.2) 214 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:22 vm05 bash[22470]: audit 2026-03-10T11:39:22.046375+0000 mon.b (mon.2) 215 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:22 vm05 bash[22470]: audit 2026-03-10T11:39:22.046831+0000 mon.b (mon.2) 216 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:22 vm05 bash[17453]: cluster 2026-03-10T11:39:21.129233+0000 mgr.y (mgr.24859) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:22 vm05 bash[17453]: cluster 2026-03-10T11:39:22.040026+0000 mon.a (mon.0) 918 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:22 vm05 bash[17453]: cluster 2026-03-10T11:39:22.040161+0000 mon.a (mon.0) 919 : cluster [DBG] Standby manager daemon x started
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:22 vm05 bash[17453]: audit 2026-03-10T11:39:22.041238+0000 mon.b (mon.2) 213 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:22 vm05 bash[17453]: audit 2026-03-10T11:39:22.045557+0000 mon.b (mon.2) 214 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:39:22.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:22 vm05 bash[17453]: audit 2026-03-10T11:39:22.046375+0000 mon.b (mon.2) 215 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T11:39:22.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:22 vm05 bash[17453]: audit 2026-03-10T11:39:22.046831+0000 mon.b (mon.2) 216 : audit [DBG] from='mgr.? 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:39:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:23 vm07 bash[17804]: cluster 2026-03-10T11:39:22.160028+0000 mon.a (mon.0) 920 : cluster [DBG] mgrmap e30: y(active, since 3s), standbys: x
2026-03-10T11:39:23.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:23 vm05 bash[22470]: cluster 2026-03-10T11:39:22.160028+0000 mon.a (mon.0) 920 : cluster [DBG] mgrmap e30: y(active, since 3s), standbys: x
2026-03-10T11:39:23.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:23 vm05 bash[17453]: cluster 2026-03-10T11:39:22.160028+0000 mon.a (mon.0) 920 : cluster [DBG] mgrmap e30: y(active, since 3s), standbys: x
2026-03-10T11:39:24.694 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:39:24 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:39:24] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.51.0"
2026-03-10T11:39:25.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:24 vm07 bash[17804]: cluster 2026-03-10T11:39:23.129573+0000 mgr.y (mgr.24859) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:25.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:24 vm07 bash[17804]: cluster 2026-03-10T11:39:24.171692+0000 mon.a (mon.0) 921 : cluster [DBG] mgrmap e31: y(active, since 5s), standbys: x
2026-03-10T11:39:25.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:24 vm05 bash[22470]: cluster 2026-03-10T11:39:23.129573+0000 mgr.y (mgr.24859) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:25.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:24 vm05 bash[22470]: cluster 2026-03-10T11:39:24.171692+0000 mon.a (mon.0) 921 : cluster [DBG] mgrmap e31: y(active, since 5s), standbys: x
2026-03-10T11:39:25.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:24 vm05 bash[17453]: cluster 2026-03-10T11:39:23.129573+0000 mgr.y (mgr.24859) 8 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:25.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:24 vm05 bash[17453]: cluster 2026-03-10T11:39:24.171692+0000 mon.a (mon.0) 921 : cluster [DBG] mgrmap e31: y(active, since 5s), standbys: x
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: cluster 2026-03-10T11:39:25.129881+0000 mgr.y (mgr.24859) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:25.270533+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:25.280429+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:25.424946+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:25.432177+0000 mon.a (mon.0) 925 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:25.827843+0000 mon.a (mon.0) 926 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:25.836292+0000 mon.a (mon.0) 927 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:25.837813+0000 mon.a (mon.0) 928 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:25.999724+0000 mon.a (mon.0) 929 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.530 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.005531+0000 mon.a (mon.0) 930 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.007944+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.008536+0000 mon.a (mon.0) 932 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.008890+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.153359+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.157691+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.161945+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.166365+0000 mon.a (mon.0) 937 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.170155+0000 mon.a (mon.0) 938 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.180681+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:26 vm05 bash[17453]: audit 2026-03-10T11:39:26.183503+0000 mon.a (mon.0) 940 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: cluster 2026-03-10T11:39:25.129881+0000 mgr.y (mgr.24859) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:25.270533+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:25.280429+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:25.424946+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:25.432177+0000 mon.a (mon.0) 925 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:25.827843+0000 mon.a (mon.0) 926 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:25.836292+0000 mon.a (mon.0) 927 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:25.837813+0000 mon.a (mon.0) 928 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:25.999724+0000 mon.a (mon.0) 929 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.005531+0000 mon.a (mon.0) 930 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.007944+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.008536+0000 mon.a (mon.0) 932 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.008890+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.153359+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.157691+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.161945+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.166365+0000 mon.a (mon.0) 937 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.170155+0000 mon.a (mon.0) 938 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.180681+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:39:26.531 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:26 vm05 bash[22470]: audit 2026-03-10T11:39:26.183503+0000 mon.a (mon.0) 940 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: cluster 2026-03-10T11:39:25.129881+0000 mgr.y (mgr.24859) 9 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:25.270533+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:25.280429+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:25.424946+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:25.432177+0000 mon.a (mon.0) 925 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:25.827843+0000 mon.a (mon.0) 926 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:25.836292+0000 mon.a (mon.0) 927 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:25.837813+0000 mon.a (mon.0) 928 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:25.999724+0000 mon.a (mon.0) 929 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.005531+0000 mon.a (mon.0) 930 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.007944+0000 mon.a (mon.0) 931 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.008536+0000 mon.a (mon.0) 932 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.008890+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.153359+0000 mon.a (mon.0) 934 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.157691+0000 mon.a (mon.0) 935 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.161945+0000 mon.a (mon.0) 936 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.166365+0000 mon.a (mon.0) 937 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.170155+0000 mon.a (mon.0) 938 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:26.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.180681+0000 mon.a (mon.0) 939 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:39:26.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:26 vm07 bash[17804]: audit 2026-03-10T11:39:26.183503+0000 mon.a (mon.0) 940 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 systemd[1]: Stopping Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.377Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..."
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.377Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..."
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.377Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..."
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.377Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..."
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.377Z caller=main.go:984 level=info msg="Scrape discovery manager stopped"
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.377Z caller=main.go:998 level=info msg="Notify discovery manager stopped"
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.377Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped"
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.377Z caller=main.go:1039 level=info msg="Stopping scrape manager..."
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.378Z caller=main.go:1031 level=info msg="Scrape manager stopped"
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.379Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..."
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.379Z caller=main.go:1261 level=info msg="Notifier manager stopped"
2026-03-10T11:39:27.412 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[38631]: ts=2026-03-10T11:39:27.379Z caller=main.go:1273 level=info msg="See you next time!"
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.009470+0000 mgr.y (mgr.24859) 10 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.009568+0000 mgr.y (mgr.24859) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.046490+0000 mgr.y (mgr.24859) 12 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.049186+0000 mgr.y (mgr.24859) 13 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.082504+0000 mgr.y (mgr.24859) 14 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.084703+0000 mgr.y (mgr.24859) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.113341+0000 mgr.y (mgr.24859) 16 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.117382+0000 mgr.y (mgr.24859) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.180452+0000 mgr.y (mgr.24859) 18 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)...
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.184218+0000 mgr.y (mgr.24859) 19 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:26.656774+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:26.664897+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.665961+0000 mgr.y (mgr.24859) 20 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.246276+0000 mon.a (mon.0) 943 : audit [DBG] from='client.? 192.168.123.105:0/871958815' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.466696+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.472363+0000 mon.a (mon.0) 945 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.474648+0000 mon.a (mon.0) 946 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.483333+0000 mon.a (mon.0) 947 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.484429+0000 mon.a (mon.0) 948 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.485331+0000 mon.a (mon.0) 949 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.489867+0000 mon.a (mon.0) 950 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.492363+0000 mon.a (mon.0) 951 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:39:27.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:27 vm07 bash[17804]: audit 2026-03-10T11:39:27.519784+0000 mon.a (mon.0) 952 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:39:27.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40777]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-prometheus-a
2026-03-10T11:39:27.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@prometheus.a.service: Deactivated successfully.
2026-03-10T11:39:27.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 systemd[1]: Stopped Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 systemd[1]: Started Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.585Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.585Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.585Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm07 (none))"
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.585Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.585Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.587Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.587Z caller=main.go:1129 level=info msg="Starting TSDB ..."
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.588Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.588Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.590Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.590Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.374µs
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.590Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.600Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=3
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.616Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=3
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.630Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=3
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.631Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=3
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.631Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=38.321µs wal_replay_duration=41.310335ms wbl_replay_duration=120ns total_replay_duration=41.453883ms
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.633Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.633Z caller=main.go:1153 level=info msg="TSDB started"
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.633Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.642Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=9.063956ms db_storage=652ns remote_storage=1.173µs web_handler=290ns query_engine=461ns scrape=835.134µs scrape_sd=83.026µs notify=6.862µs notify_sd=5.42µs rules=7.832738ms tracing=3.437µs
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.643Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
2026-03-10T11:39:27.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:39:27 vm07 bash[40852]: ts=2026-03-10T11:39:27.643Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
2026-03-10T11:39:27.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.009470+0000 mgr.y (mgr.24859) 10 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:39:27.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.009568+0000 mgr.y (mgr.24859) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T11:39:27.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.046490+0000 mgr.y (mgr.24859) 12 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf
2026-03-10T11:39:27.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.049186+0000 mgr.y (mgr.24859) 13 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf
2026-03-10T11:39:27.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.082504+0000 mgr.y (mgr.24859) 14 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:39:27.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.084703+0000 mgr.y (mgr.24859) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.113341+0000 mgr.y (mgr.24859) 16 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.117382+0000 mgr.y (mgr.24859) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.180452+0000 mgr.y (mgr.24859) 18 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)...
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.184218+0000 mgr.y (mgr.24859) 19 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:26.656774+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:26.664897+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.665961+0000 mgr.y (mgr.24859) 20 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.246276+0000 mon.a (mon.0) 943 : audit [DBG] from='client.? 192.168.123.105:0/871958815' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.466696+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.472363+0000 mon.a (mon.0) 945 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.474648+0000 mon.a (mon.0) 946 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.483333+0000 mon.a (mon.0) 947 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.484429+0000 mon.a (mon.0) 948 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.485331+0000 mon.a (mon.0) 949 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.489867+0000 mon.a (mon.0) 950 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.492363+0000 mon.a (mon.0) 951 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:27 vm05 bash[22470]: audit 2026-03-10T11:39:27.519784+0000 mon.a (mon.0) 952 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.009470+0000 mgr.y (mgr.24859) 10 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.009568+0000 mgr.y (mgr.24859) 11 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.046490+0000 mgr.y (mgr.24859) 12 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.049186+0000 mgr.y (mgr.24859) 13 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.082504+0000 mgr.y (mgr.24859) 14 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.084703+0000 mgr.y (mgr.24859) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.113341+0000 mgr.y (mgr.24859) 16 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.117382+0000 mgr.y (mgr.24859) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.180452+0000 mgr.y (mgr.24859) 18 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)...
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.184218+0000 mgr.y (mgr.24859) 19 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:26.656774+0000 mon.a (mon.0) 941 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:26.664897+0000 mon.a (mon.0) 942 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.665961+0000 mgr.y (mgr.24859) 20 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.246276+0000 mon.a (mon.0) 943 : audit [DBG] from='client.? 192.168.123.105:0/871958815' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.466696+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.472363+0000 mon.a (mon.0) 945 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.474648+0000 mon.a (mon.0) 946 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.483333+0000 mon.a (mon.0) 947 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.484429+0000 mon.a (mon.0) 948 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.485331+0000 mon.a (mon.0) 949 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.489867+0000 mon.a (mon.0) 950 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.492363+0000 mon.a (mon.0) 951 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:39:27.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:27 vm05 bash[17453]: audit 2026-03-10T11:39:27.519784+0000 mon.a (mon.0) 952 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:39:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:28 vm07 bash[17804]: cephadm 2026-03-10T11:39:26.903590+0000 mgr.y (mgr.24859) 21 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07
2026-03-10T11:39:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:28 vm07 bash[17804]: cluster 2026-03-10T11:39:27.130365+0000 mgr.y (mgr.24859) 22 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T11:39:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:28 vm07 bash[17804]: audit 2026-03-10T11:39:27.474994+0000 mgr.y (mgr.24859) 23 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:39:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:28 vm07 bash[17804]: cephadm 2026-03-10T11:39:27.484242+0000 mgr.y (mgr.24859) 24 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard
2026-03-10T11:39:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:28 vm07 bash[17804]: audit 2026-03-10T11:39:27.484698+0000 mgr.y (mgr.24859) 25 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:39:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:28 vm07 bash[17804]: audit 2026-03-10T11:39:27.485560+0000 mgr.y (mgr.24859) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:39:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:28 vm07 bash[17804]: audit 2026-03-10T11:39:27.492632+0000 mgr.y (mgr.24859) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:39:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:28 vm05 bash[22470]: cephadm 2026-03-10T11:39:26.903590+0000 mgr.y (mgr.24859) 21 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07
2026-03-10T11:39:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:28 vm05 bash[22470]: cluster 2026-03-10T11:39:27.130365+0000 mgr.y (mgr.24859) 22 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T11:39:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:28 vm05 bash[22470]: audit 2026-03-10T11:39:27.474994+0000 mgr.y (mgr.24859) 23 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:39:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:28 vm05 bash[22470]: cephadm 2026-03-10T11:39:27.484242+0000 mgr.y (mgr.24859) 24 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard
2026-03-10T11:39:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:28 vm05 bash[22470]: audit 2026-03-10T11:39:27.484698+0000 mgr.y (mgr.24859) 25 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:39:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:28 vm05 bash[22470]: audit 2026-03-10T11:39:27.485560+0000 mgr.y (mgr.24859) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:39:29.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:28 vm05 bash[22470]: audit 2026-03-10T11:39:27.492632+0000 mgr.y (mgr.24859) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:39:29.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:28 vm05 bash[17453]: cephadm 2026-03-10T11:39:26.903590+0000 mgr.y (mgr.24859) 21 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07
2026-03-10T11:39:29.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:28 vm05 bash[17453]: cluster 2026-03-10T11:39:27.130365+0000 mgr.y (mgr.24859) 22 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T11:39:29.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:28 vm05 bash[17453]: audit 2026-03-10T11:39:27.474994+0000 mgr.y (mgr.24859) 23 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:39:29.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:28 vm05 bash[17453]: cephadm 2026-03-10T11:39:27.484242+0000 mgr.y (mgr.24859) 24 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard
2026-03-10T11:39:29.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:28 vm05 bash[17453]: audit 2026-03-10T11:39:27.484698+0000 mgr.y (mgr.24859) 25 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:39:29.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:28 vm05 bash[17453]: audit 2026-03-10T11:39:27.485560+0000 mgr.y (mgr.24859) 26 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:39:29.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:28 vm05 bash[17453]: audit 2026-03-10T11:39:27.492632+0000 mgr.y (mgr.24859) 27 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:39:31.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:30 vm07 bash[17804]: cluster 2026-03-10T11:39:29.130633+0000 mgr.y (mgr.24859) 28 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:39:31.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:30 vm05 bash[22470]: cluster 2026-03-10T11:39:29.130633+0000 mgr.y (mgr.24859) 28 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:39:31.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:30 vm05 bash[17453]: cluster 2026-03-10T11:39:29.130633+0000 mgr.y (mgr.24859) 28 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:39:32.344 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:31 vm05 bash[17453]: cluster 2026-03-10T11:39:31.131198+0000 mgr.y (mgr.24859) 29 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:39:32.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:31 vm05 bash[22470]: cluster 2026-03-10T11:39:31.131198+0000 mgr.y (mgr.24859) 29 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:39:32.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:31 vm07 bash[17804]: cluster 2026-03-10T11:39:31.131198+0000 mgr.y (mgr.24859) 29 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:39:34.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:33 vm05 bash[22470]: audit 2026-03-10T11:39:32.786940+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:34.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:33 vm05 bash[22470]: audit 2026-03-10T11:39:32.793645+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:34.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:33 vm05 bash[22470]: audit 2026-03-10T11:39:32.867793+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:34.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:33 vm05 bash[22470]: audit 2026-03-10T11:39:32.875010+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:34.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:33 vm05 bash[22470]: audit 2026-03-10T11:39:32.876347+0000 mon.a (mon.0) 957 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:39:34.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:33 vm05 bash[22470]: audit 2026-03-10T11:39:32.876827+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:39:34.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:33 vm05 bash[22470]: audit 2026-03-10T11:39:32.880475+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
2026-03-10T11:39:34.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:33 vm05 bash[17453]: audit 2026-03-10T11:39:32.786940+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:34.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:33 vm05 bash[17453]: audit 2026-03-10T11:39:32.793645+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:34.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:33 vm05 bash[17453]: audit 2026-03-10T11:39:32.867793+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:34.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:33 vm05 bash[17453]: audit 2026-03-10T11:39:32.875010+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:34.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:33 vm05 bash[17453]: audit 2026-03-10T11:39:32.876347+0000 mon.a (mon.0) 957 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:39:34.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:33 vm05 bash[17453]: audit 2026-03-10T11:39:32.876827+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:39:34.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:33 vm05 bash[17453]: audit 2026-03-10T11:39:32.880475+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:34.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:33 vm07 bash[17804]: audit 2026-03-10T11:39:32.786940+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:33 vm07 bash[17804]: audit 2026-03-10T11:39:32.793645+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:33 vm07 bash[17804]: audit 2026-03-10T11:39:32.867793+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:33 vm07 bash[17804]: audit 2026-03-10T11:39:32.875010+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:33 vm07 bash[17804]: audit 2026-03-10T11:39:32.876347+0000 mon.a (mon.0) 957 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:39:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:33 vm07 bash[17804]: audit 2026-03-10T11:39:32.876827+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:39:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:33 vm07 bash[17804]: audit 2026-03-10T11:39:32.880475+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:39:35.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:34 vm07 bash[17804]: cluster 
2026-03-10T11:39:33.131522+0000 mgr.y (mgr.24859) 30 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:39:35.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:34 vm07 bash[17804]: audit 2026-03-10T11:39:34.559741+0000 mon.a (mon.0) 960 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:35.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:34 vm05 bash[17453]: cluster 2026-03-10T11:39:33.131522+0000 mgr.y (mgr.24859) 30 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:39:35.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:34 vm05 bash[17453]: audit 2026-03-10T11:39:34.559741+0000 mon.a (mon.0) 960 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:35.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:34 vm05 bash[22470]: cluster 2026-03-10T11:39:33.131522+0000 mgr.y (mgr.24859) 30 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:39:35.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:34 vm05 bash[22470]: audit 2026-03-10T11:39:34.559741+0000 mon.a (mon.0) 960 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:37.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:36 vm07 bash[17804]: cluster 2026-03-10T11:39:35.131841+0000 mgr.y (mgr.24859) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:39:37.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:36 vm05 bash[17453]: cluster 2026-03-10T11:39:35.131841+0000 mgr.y (mgr.24859) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:39:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:36 vm05 bash[22470]: cluster 2026-03-10T11:39:35.131841+0000 mgr.y (mgr.24859) 31 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:39:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:38 vm07 bash[17804]: audit 2026-03-10T11:39:37.082832+0000 mgr.y (mgr.24859) 32 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:39.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:38 vm07 bash[17804]: cluster 2026-03-10T11:39:37.132354+0000 mgr.y (mgr.24859) 33 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:39:39.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:38 vm05 bash[17453]: audit 2026-03-10T11:39:37.082832+0000 mgr.y (mgr.24859) 32 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:39.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:38 vm05 bash[17453]: cluster 2026-03-10T11:39:37.132354+0000 
mgr.y (mgr.24859) 33 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:39:39.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:38 vm05 bash[22470]: audit 2026-03-10T11:39:37.082832+0000 mgr.y (mgr.24859) 32 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:39.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:38 vm05 bash[22470]: cluster 2026-03-10T11:39:37.132354+0000 mgr.y (mgr.24859) 33 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:39:39.342 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:39:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:39:38] "GET /metrics HTTP/1.1" 200 37542 "" "Prometheus/2.51.0" 2026-03-10T11:39:41.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:40 vm07 bash[17804]: cluster 2026-03-10T11:39:39.132674+0000 mgr.y (mgr.24859) 34 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T11:39:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:40 vm05 bash[17453]: cluster 2026-03-10T11:39:39.132674+0000 mgr.y (mgr.24859) 34 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T11:39:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:40 vm05 bash[22470]: cluster 2026-03-10T11:39:39.132674+0000 mgr.y (mgr.24859) 34 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 938 B/s rd, 1 op/s 2026-03-10T11:39:42.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:41 vm05 bash[17453]: cluster 2026-03-10T11:39:41.133141+0000 mgr.y (mgr.24859) 35 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T11:39:42.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:41 vm05 bash[22470]: cluster 2026-03-10T11:39:41.133141+0000 mgr.y (mgr.24859) 35 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T11:39:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:41 vm07 bash[17804]: cluster 2026-03-10T11:39:41.133141+0000 mgr.y (mgr.24859) 35 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T11:39:45.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:44 vm07 bash[17804]: cluster 2026-03-10T11:39:43.133444+0000 mgr.y (mgr.24859) 36 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:44 vm05 bash[17453]: cluster 2026-03-10T11:39:43.133444+0000 mgr.y (mgr.24859) 36 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:44 vm05 bash[22470]: cluster 2026-03-10T11:39:43.133444+0000 mgr.y (mgr.24859) 36 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:46.342 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:45 vm05 bash[17453]: cluster 2026-03-10T11:39:45.133792+0000 mgr.y (mgr.24859) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:46.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:45 vm05 bash[22470]: cluster 2026-03-10T11:39:45.133792+0000 mgr.y (mgr.24859) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:46.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:45 vm07 bash[17804]: cluster 2026-03-10T11:39:45.133792+0000 mgr.y (mgr.24859) 37 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:49.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:48 vm05 bash[17453]: audit 2026-03-10T11:39:47.091990+0000 mgr.y (mgr.24859) 38 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:49.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:48 vm05 bash[17453]: cluster 2026-03-10T11:39:47.134352+0000 mgr.y (mgr.24859) 39 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:49.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:48 vm05 bash[22470]: audit 2026-03-10T11:39:47.091990+0000 mgr.y (mgr.24859) 38 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:49.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:48 vm05 bash[22470]: cluster 2026-03-10T11:39:47.134352+0000 mgr.y (mgr.24859) 39 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:49.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:39:48 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:39:48] "GET /metrics HTTP/1.1" 200 37542 "" "Prometheus/2.51.0" 2026-03-10T11:39:49.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:48 vm07 bash[17804]: audit 2026-03-10T11:39:47.091990+0000 mgr.y (mgr.24859) 38 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:48 vm07 bash[17804]: cluster 2026-03-10T11:39:47.134352+0000 mgr.y (mgr.24859) 39 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:50.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:49 vm07 bash[17804]: audit 2026-03-10T11:39:49.559972+0000 mon.a (mon.0) 961 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:50.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:49 vm05 bash[17453]: audit 2026-03-10T11:39:49.559972+0000 mon.a (mon.0) 961 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:50.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:49 vm05 bash[22470]: audit 2026-03-10T11:39:49.559972+0000 mon.a (mon.0) 961 : audit [DBG] from='mgr.24859 
192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:39:51.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:50 vm07 bash[17804]: cluster 2026-03-10T11:39:49.134634+0000 mgr.y (mgr.24859) 40 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:51.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:50 vm05 bash[17453]: cluster 2026-03-10T11:39:49.134634+0000 mgr.y (mgr.24859) 40 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:51.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:50 vm05 bash[22470]: cluster 2026-03-10T11:39:49.134634+0000 mgr.y (mgr.24859) 40 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:52.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:51 vm07 bash[17804]: cluster 2026-03-10T11:39:51.135132+0000 mgr.y (mgr.24859) 41 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:52.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:51 vm05 bash[17453]: cluster 2026-03-10T11:39:51.135132+0000 mgr.y (mgr.24859) 41 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:52.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:51 vm05 bash[22470]: cluster 2026-03-10T11:39:51.135132+0000 mgr.y (mgr.24859) 41 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:55.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:54 vm07 bash[17804]: cluster 2026-03-10T11:39:53.135418+0000 mgr.y (mgr.24859) 42 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:55.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:54 vm05 bash[17453]: cluster 2026-03-10T11:39:53.135418+0000 mgr.y (mgr.24859) 42 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:55.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:54 vm05 bash[22470]: cluster 2026-03-10T11:39:53.135418+0000 mgr.y (mgr.24859) 42 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:56.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:55 vm05 bash[17453]: cluster 2026-03-10T11:39:55.135707+0000 mgr.y (mgr.24859) 43 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:56.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:55 vm05 bash[22470]: cluster 2026-03-10T11:39:55.135707+0000 mgr.y (mgr.24859) 43 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:56.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:55 vm07 bash[17804]: cluster 2026-03-10T11:39:55.135707+0000 mgr.y (mgr.24859) 43 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:39:58.444 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:58 vm07 bash[17804]: audit 2026-03-10T11:39:57.099953+0000 mgr.y (mgr.24859) 44 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:58.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:39:58 vm07 bash[17804]: cluster 2026-03-10T11:39:57.136151+0000 mgr.y (mgr.24859) 45 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:58.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:58 vm05 bash[17453]: audit 2026-03-10T11:39:57.099953+0000 mgr.y (mgr.24859) 44 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:58.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:39:58 vm05 bash[17453]: cluster 2026-03-10T11:39:57.136151+0000 mgr.y (mgr.24859) 45 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:58.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:58 vm05 bash[22470]: audit 2026-03-10T11:39:57.099953+0000 mgr.y (mgr.24859) 44 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:39:58.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:39:58 vm05 bash[22470]: cluster 2026-03-10T11:39:57.136151+0000 mgr.y (mgr.24859) 45 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:39:59.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:39:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:39:58] "GET /metrics HTTP/1.1" 200 37539 "" "Prometheus/2.51.0" 2026-03-10T11:40:00.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:00 vm05 bash[17453]: cluster 2026-03-10T11:39:59.136461+0000 mgr.y (mgr.24859) 46 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:00.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:00 vm05 bash[17453]: cluster 2026-03-10T11:40:00.000103+0000 mon.a (mon.0) 962 : cluster [INF] overall HEALTH_OK 2026-03-10T11:40:00.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:00 vm05 bash[22470]: cluster 2026-03-10T11:39:59.136461+0000 mgr.y (mgr.24859) 46 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:00.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:00 vm05 bash[22470]: cluster 2026-03-10T11:40:00.000103+0000 mon.a (mon.0) 962 : cluster [INF] overall HEALTH_OK 2026-03-10T11:40:00.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:00 vm07 bash[17804]: cluster 2026-03-10T11:39:59.136461+0000 mgr.y (mgr.24859) 46 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:00.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:00 vm07 bash[17804]: cluster 2026-03-10T11:40:00.000103+0000 mon.a (mon.0) 962 : cluster [INF] overall HEALTH_OK 2026-03-10T11:40:02.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:02 vm05 bash[17453]: cluster 2026-03-10T11:40:01.136961+0000 mgr.y (mgr.24859) 47 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB 
data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:02.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:02 vm05 bash[22470]: cluster 2026-03-10T11:40:01.136961+0000 mgr.y (mgr.24859) 47 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:02.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:02 vm07 bash[17804]: cluster 2026-03-10T11:40:01.136961+0000 mgr.y (mgr.24859) 47 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:04.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:03 vm07 bash[17804]: cluster 2026-03-10T11:40:03.137208+0000 mgr.y (mgr.24859) 48 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:04.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:03 vm05 bash[17453]: cluster 2026-03-10T11:40:03.137208+0000 mgr.y (mgr.24859) 48 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:04.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:03 vm05 bash[22470]: cluster 2026-03-10T11:40:03.137208+0000 mgr.y (mgr.24859) 48 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:05.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:04 vm07 bash[17804]: audit 2026-03-10T11:40:04.560045+0000 mon.a (mon.0) 963 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:40:05.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:04 vm05 bash[17453]: audit 2026-03-10T11:40:04.560045+0000 mon.a (mon.0) 963 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:40:05.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:04 vm05 bash[22470]: audit 2026-03-10T11:40:04.560045+0000 mon.a (mon.0) 963 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:40:06.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:05 vm07 bash[17804]: cluster 2026-03-10T11:40:05.137426+0000 mgr.y (mgr.24859) 49 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:06.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:05 vm05 bash[17453]: cluster 2026-03-10T11:40:05.137426+0000 mgr.y (mgr.24859) 49 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:06.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:05 vm05 bash[22470]: cluster 2026-03-10T11:40:05.137426+0000 mgr.y (mgr.24859) 49 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:08.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:08 vm07 bash[17804]: audit 2026-03-10T11:40:07.102749+0000 mgr.y (mgr.24859) 50 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:08.444 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:08 vm07 bash[17804]: cluster 2026-03-10T11:40:07.140813+0000 mgr.y (mgr.24859) 51 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:08.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:08 vm05 bash[17453]: audit 2026-03-10T11:40:07.102749+0000 mgr.y (mgr.24859) 50 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:08.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:08 vm05 bash[17453]: cluster 2026-03-10T11:40:07.140813+0000 mgr.y (mgr.24859) 51 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:08.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:08 vm05 bash[22470]: audit 2026-03-10T11:40:07.102749+0000 mgr.y (mgr.24859) 50 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:08.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:08 vm05 bash[22470]: cluster 2026-03-10T11:40:07.140813+0000 mgr.y (mgr.24859) 51 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:09.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:40:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:40:08] "GET /metrics HTTP/1.1" 200 37541 "" "Prometheus/2.51.0" 2026-03-10T11:40:10.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:10 vm05 bash[17453]: cluster 2026-03-10T11:40:09.141038+0000 mgr.y (mgr.24859) 52 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T11:40:10.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:10 vm05 bash[22470]: cluster 2026-03-10T11:40:09.141038+0000 mgr.y (mgr.24859) 52 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T11:40:10.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:10 vm07 bash[17804]: cluster 2026-03-10T11:40:09.141038+0000 mgr.y (mgr.24859) 52 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T11:40:12.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:12 vm05 bash[17453]: cluster 2026-03-10T11:40:11.141558+0000 mgr.y (mgr.24859) 53 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:12.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:12 vm05 bash[22470]: cluster 2026-03-10T11:40:11.141558+0000 mgr.y (mgr.24859) 53 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:12.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:12 vm07 bash[17804]: cluster 2026-03-10T11:40:11.141558+0000 mgr.y (mgr.24859) 53 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:14.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:13 vm07 bash[17804]: cluster 2026-03-10T11:40:13.141858+0000 mgr.y (mgr.24859) 54 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 95 
MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T11:40:14.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:13 vm05 bash[17453]: cluster 2026-03-10T11:40:13.141858+0000 mgr.y (mgr.24859) 54 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T11:40:14.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:13 vm05 bash[22470]: cluster 2026-03-10T11:40:13.141858+0000 mgr.y (mgr.24859) 54 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T11:40:16.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:16 vm05 bash[17453]: cluster 2026-03-10T11:40:15.142132+0000 mgr.y (mgr.24859) 55 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T11:40:16.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:16 vm05 bash[22470]: cluster 2026-03-10T11:40:15.142132+0000 mgr.y (mgr.24859) 55 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T11:40:16.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:16 vm07 bash[17804]: cluster 2026-03-10T11:40:15.142132+0000 mgr.y (mgr.24859) 55 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 852 B/s rd, 0 op/s 2026-03-10T11:40:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:18 vm05 bash[17453]: audit 2026-03-10T11:40:17.113300+0000 mgr.y (mgr.24859) 56 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:18 vm05 bash[17453]: cluster 2026-03-10T11:40:17.142595+0000 mgr.y (mgr.24859) 57 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:18 vm05 bash[22470]: audit 2026-03-10T11:40:17.113300+0000 mgr.y (mgr.24859) 56 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:18 vm05 bash[22470]: cluster 2026-03-10T11:40:17.142595+0000 mgr.y (mgr.24859) 57 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:18.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:18 vm07 bash[17804]: audit 2026-03-10T11:40:17.113300+0000 mgr.y (mgr.24859) 56 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:18.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:18 vm07 bash[17804]: cluster 2026-03-10T11:40:17.142595+0000 mgr.y (mgr.24859) 57 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:19.342 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:40:18 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:40:18] "GET /metrics HTTP/1.1" 200 37541 "" "Prometheus/2.51.0" 2026-03-10T11:40:20.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:20 vm05 bash[22470]: cluster 2026-03-10T11:40:19.142911+0000 mgr.y 
(mgr.24859) 58 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:20.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:20 vm05 bash[22470]: audit 2026-03-10T11:40:19.560318+0000 mon.a (mon.0) 964 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:40:20.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:20 vm05 bash[17453]: cluster 2026-03-10T11:40:19.142911+0000 mgr.y (mgr.24859) 58 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:20.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:20 vm05 bash[17453]: audit 2026-03-10T11:40:19.560318+0000 mon.a (mon.0) 964 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:40:20.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:20 vm07 bash[17804]: cluster 2026-03-10T11:40:19.142911+0000 mgr.y (mgr.24859) 58 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:20.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:20 vm07 bash[17804]: audit 2026-03-10T11:40:19.560318+0000 mon.a (mon.0) 964 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:40:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:22 vm05 bash[22470]: cluster 2026-03-10T11:40:21.143423+0000 mgr.y (mgr.24859) 59 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:22.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:22 vm05 bash[17453]: cluster 2026-03-10T11:40:21.143423+0000 mgr.y (mgr.24859) 59 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:22.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:22 vm07 bash[17804]: cluster 2026-03-10T11:40:21.143423+0000 mgr.y (mgr.24859) 59 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:24.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:23 vm07 bash[17804]: cluster 2026-03-10T11:40:23.143728+0000 mgr.y (mgr.24859) 60 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:24.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:23 vm05 bash[22470]: cluster 2026-03-10T11:40:23.143728+0000 mgr.y (mgr.24859) 60 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:23 vm05 bash[17453]: cluster 2026-03-10T11:40:23.143728+0000 mgr.y (mgr.24859) 60 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:26.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:26 vm05 bash[22470]: cluster 2026-03-10T11:40:25.144103+0000 mgr.y (mgr.24859) 61 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 
853 B/s rd, 0 op/s 2026-03-10T11:40:26.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:26 vm05 bash[17453]: cluster 2026-03-10T11:40:25.144103+0000 mgr.y (mgr.24859) 61 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:26 vm07 bash[17804]: cluster 2026-03-10T11:40:25.144103+0000 mgr.y (mgr.24859) 61 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:28 vm05 bash[22470]: audit 2026-03-10T11:40:27.123933+0000 mgr.y (mgr.24859) 62 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:28 vm05 bash[22470]: cluster 2026-03-10T11:40:27.144621+0000 mgr.y (mgr.24859) 63 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:28 vm05 bash[17453]: audit 2026-03-10T11:40:27.123933+0000 mgr.y (mgr.24859) 62 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:28 vm05 bash[17453]: cluster 2026-03-10T11:40:27.144621+0000 mgr.y (mgr.24859) 63 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:28.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:28 vm07 bash[17804]: audit 2026-03-10T11:40:27.123933+0000 mgr.y (mgr.24859) 62 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:28.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:28 vm07 bash[17804]: cluster 2026-03-10T11:40:27.144621+0000 mgr.y (mgr.24859) 63 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:29.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:40:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:40:28] "GET /metrics HTTP/1.1" 200 37540 "" "Prometheus/2.51.0" 2026-03-10T11:40:30.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:30 vm05 bash[17453]: cluster 2026-03-10T11:40:29.144948+0000 mgr.y (mgr.24859) 64 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:30 vm05 bash[22470]: cluster 2026-03-10T11:40:29.144948+0000 mgr.y (mgr.24859) 64 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:30.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:30 vm07 bash[17804]: cluster 2026-03-10T11:40:29.144948+0000 mgr.y (mgr.24859) 64 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:32.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:32 vm05 bash[17453]: cluster 2026-03-10T11:40:31.145446+0000 mgr.y (mgr.24859) 65 : cluster [DBG] pgmap 
v39: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:32.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:32 vm05 bash[22470]: cluster 2026-03-10T11:40:31.145446+0000 mgr.y (mgr.24859) 65 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:32.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:32 vm07 bash[17804]: cluster 2026-03-10T11:40:31.145446+0000 mgr.y (mgr.24859) 65 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:33.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:33 vm05 bash[17453]: audit 2026-03-10T11:40:32.921375+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:40:33.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:33 vm05 bash[17453]: audit 2026-03-10T11:40:33.231192+0000 mon.a (mon.0) 966 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:40:33.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:33 vm05 bash[17453]: audit 2026-03-10T11:40:33.231718+0000 mon.a (mon.0) 967 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:40:33.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:33 vm05 bash[17453]: audit 2026-03-10T11:40:33.236364+0000 mon.a (mon.0) 968 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:40:33.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:33 vm05 bash[22470]: audit 2026-03-10T11:40:32.921375+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:40:33.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:33 vm05 bash[22470]: audit 2026-03-10T11:40:33.231192+0000 mon.a (mon.0) 966 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:40:33.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:33 vm05 bash[22470]: audit 2026-03-10T11:40:33.231718+0000 mon.a (mon.0) 967 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:40:33.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:33 vm05 bash[22470]: audit 2026-03-10T11:40:33.236364+0000 mon.a (mon.0) 968 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:40:33.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:33 vm07 bash[17804]: audit 2026-03-10T11:40:32.921375+0000 mon.a (mon.0) 965 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:40:33.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:33 vm07 bash[17804]: audit 2026-03-10T11:40:33.231192+0000 mon.a (mon.0) 966 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:40:33.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:33 vm07 bash[17804]: audit 
2026-03-10T11:40:33.231718+0000 mon.a (mon.0) 967 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:40:33.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:33 vm07 bash[17804]: audit 2026-03-10T11:40:33.236364+0000 mon.a (mon.0) 968 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' 2026-03-10T11:40:34.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:34 vm05 bash[17453]: cluster 2026-03-10T11:40:33.145785+0000 mgr.y (mgr.24859) 66 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:34.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:34 vm05 bash[22470]: cluster 2026-03-10T11:40:33.145785+0000 mgr.y (mgr.24859) 66 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:34.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:34 vm07 bash[17804]: cluster 2026-03-10T11:40:33.145785+0000 mgr.y (mgr.24859) 66 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:35.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:35 vm05 bash[17453]: audit 2026-03-10T11:40:34.560374+0000 mon.a (mon.0) 969 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:40:35.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:35 vm05 bash[22470]: audit 2026-03-10T11:40:34.560374+0000 mon.a (mon.0) 969 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:40:35.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:35 vm07 bash[17804]: audit 2026-03-10T11:40:34.560374+0000 mon.a (mon.0) 969 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:40:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:36 vm05 bash[17453]: cluster 2026-03-10T11:40:35.146081+0000 mgr.y (mgr.24859) 67 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:36 vm05 bash[22470]: cluster 2026-03-10T11:40:35.146081+0000 mgr.y (mgr.24859) 67 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:36.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:36 vm07 bash[17804]: cluster 2026-03-10T11:40:35.146081+0000 mgr.y (mgr.24859) 67 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:38 vm05 bash[17453]: audit 2026-03-10T11:40:37.134527+0000 mgr.y (mgr.24859) 68 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:38 vm05 bash[17453]: cluster 2026-03-10T11:40:37.146509+0000 mgr.y (mgr.24859) 69 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 
1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:38 vm05 bash[22470]: audit 2026-03-10T11:40:37.134527+0000 mgr.y (mgr.24859) 68 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:38 vm05 bash[22470]: cluster 2026-03-10T11:40:37.146509+0000 mgr.y (mgr.24859) 69 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:38.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:38 vm07 bash[17804]: audit 2026-03-10T11:40:37.134527+0000 mgr.y (mgr.24859) 68 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:40:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:38 vm07 bash[17804]: cluster 2026-03-10T11:40:37.146509+0000 mgr.y (mgr.24859) 69 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:39.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:40:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:40:38] "GET /metrics HTTP/1.1" 200 37538 "" "Prometheus/2.51.0" 2026-03-10T11:40:40.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:40 vm05 bash[17453]: cluster 2026-03-10T11:40:39.146801+0000 mgr.y (mgr.24859) 70 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:40.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:40 vm05 bash[22470]: cluster 2026-03-10T11:40:39.146801+0000 mgr.y (mgr.24859) 70 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:40.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:40 vm07 bash[17804]: cluster 2026-03-10T11:40:39.146801+0000 mgr.y (mgr.24859) 70 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:42.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:42 vm05 bash[17453]: cluster 2026-03-10T11:40:41.147289+0000 mgr.y (mgr.24859) 71 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:40:42 vm05 bash[22470]: cluster 2026-03-10T11:40:41.147289+0000 mgr.y (mgr.24859) 71 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:42.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:42 vm07 bash[17804]: cluster 2026-03-10T11:40:41.147289+0000 mgr.y (mgr.24859) 71 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:40:44.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:40:43 vm07 bash[17804]: cluster 2026-03-10T11:40:43.147580+0000 mgr.y (mgr.24859) 72 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:40:44.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:40:43 vm05 bash[17453]: cluster 2026-03-10T11:40:43.147580+0000 mgr.y (mgr.24859) 72 : cluster [DBG] pgmap v45: 
161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
[2026-03-10T11:40:44.342 through 11:42:19.092: ~95 s of steady-state journal output. Every cluster/audit record in this window was echoed near-verbatim by the journals of mon.a (vm05, bash[17453]), mon.c (vm05, bash[22470]), and mon.b (vm07, bash[17804]); the three-way duplicates are collapsed to one entry per record below.]
  cluster [DBG] pgmap v45–v92 (mgr.y mgr.24859, log entries 72–128): one report every 2 s, identical except for the client read trickle — 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 767 B/s–1.2 KiB/s rd, 0–1 op/s
  audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch — every 10 s (mgr.y entries 74, 80, 87, 93, 99, 105, 111, 117, 123, 129)
  audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch — every 15 s (mon.a entries 970–972 and 977–979)
  INFO:journalctl@ceph.mgr.y.vm05.stdout: ::ffff:192.168.123.107 - - "GET /metrics HTTP/1.1" 200 ~37.5 kB "" "Prometheus/2.51.0" — every 10 s
In addition, a one-off config refresh burst at 11:41:33 (mon.a entries 973–976):
2026-03-10T11:41:34.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:41:33 vm05 bash[22470]: audit 2026-03-10T11:41:33.283329+0000 mon.a (mon.0) 973 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:41:34.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:41:33 vm05 bash[22470]: audit 2026-03-10T11:41:33.578388+0000 mon.a (mon.0) 974 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:41:34.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:41:33 vm05 bash[22470]: audit 2026-03-10T11:41:33.579001+0000 mon.a (mon.0) 975 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:41:34.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:41:33 vm05 bash[22470]: audit 2026-03-10T11:41:33.584203+0000 mon.a (mon.0) 976 : audit [INF] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y'
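Each of the `ceph orch ps` / `ceph versions` / `ceph -s` probes that follow is executed through `cephadm shell`, pinned to the original quay.io/ceph/ceph:v17.2.0 image and pointed at the cluster by fsid; the `-e sha1=...` flag only exports the build sha1 into the container environment for the workunits. A minimal sketch of the same wrapper, useful when repeating these checks by hand (the image, fsid, and paths are the ones this run uses):

  # Run an arbitrary ceph CLI command inside a containerized shell
  # against this cluster; the trailing command after '--' is what runs.
  sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph orch ps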
2026-03-10T11:42:19.467 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:42:19.900 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (8m) 2m ago 15m 14.0M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (8m) 2m ago 15m 38.1M - dad864ee21e9 ea7bd1695c30
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (2m) 2m ago 15m 41.3M - 3.5 e1d6a67b021e 5af37baefd5f
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283 running (10m) 2m ago 18m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 29cf7638c524
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (5m) 2m ago 19m 517M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (19m) 2m ago 19m 59.7M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (18m) 2m ago 18m 48.5M 2048M 17.2.0 e1d6a67b021e 824de3717020
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (18m) 2m ago 18m 46.4M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (8m) 2m ago 15m 7768k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (8m) 2m ago 15m 7736k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (18m) 2m ago 18m 51.4M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (17m) 2m ago 17m 54.8M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (17m) 2m ago 17m 50.4M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (17m) 2m ago 17m 52.9M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (17m) 2m ago 17m 53.2M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (16m) 2m ago 16m 50.1M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (16m) 2m ago 16m 48.6M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (16m) 2m ago 16m 51.2M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (2m) 2m ago 15m 36.5M - 2.51.0 1d3b7f56885b 7f09dc700f9b
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (15m) 2m ago 15m 85.2M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:42:19.901 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (15m) 2m ago 15m 86.4M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
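At this checkpoint the staggered upgrade has touched only the managers: mgr.x and mgr.y run the target build 19.2.3-678-ge911bdeb while every mon, osd, and rgw daemon is still on 17.2.0 (the other version strings, e.g. 3.5 for iscsi and 2.51.0 for prometheus, are the non-Ceph containers' own component versions). A quick way to tally that state by hand, sketched under the assumption that jq is available and that you are inside the cephadm shell wrapper shown above:

  # Count daemons per (type, version) pair; at this stage only the two
  # mgr daemons should report 19.2.3-678-ge911bdeb.
  ceph orch ps --format json \
    | jq -r '.[] | "\(.daemon_type) \(.version)"' | sort | uniq -c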
2026-03-10T11:42:19.948 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-10T11:42:20.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:20 vm05 bash[22470]: cluster 2026-03-10T11:42:19.165252+0000 mgr.y (mgr.24859) 130 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:42:20.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:20 vm05 bash[22470]: audit 2026-03-10T11:42:19.563219+0000 mon.a (mon.0) 980 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:42:20.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:20 vm05 bash[22470]: audit 2026-03-10T11:42:19.899405+0000 mgr.y (mgr.24859) 131 : audit [DBG] from='client.24940 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:42:20.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:20 vm05 bash[17453]: cluster 2026-03-10T11:42:19.165252+0000 mgr.y (mgr.24859) 130 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:42:20.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:20 vm05 bash[17453]: audit 2026-03-10T11:42:19.563219+0000 mon.a (mon.0) 980 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:42:20.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:20 vm05 bash[17453]: audit 2026-03-10T11:42:19.899405+0000 mgr.y (mgr.24859) 131 : audit [DBG] from='client.24940 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    "mon": {
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    "mgr": {
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    "osd": {
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    "mds": {},
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    "rgw": {
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    "overall": {
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13,
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:    }
2026-03-10T11:42:20.406 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:42:20.459 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph -s'
2026-03-10T11:42:20.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:20 vm07 bash[17804]: cluster 2026-03-10T11:42:19.165252+0000 mgr.y (mgr.24859) 130 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:42:20.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:20 vm07 bash[17804]: audit 2026-03-10T11:42:19.563219+0000 mon.a (mon.0) 980 : audit [DBG] from='mgr.24859 192.168.123.105:0/2800584692' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:42:20.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:20 vm07 bash[17804]: audit 2026-03-10T11:42:19.899405+0000 mgr.y (mgr.24859) 131 : audit [DBG] from='client.24940 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:  cluster:
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    id:     72041074-1c73-11f1-8607-4fca9a5e0a4d
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    health: HEALTH_OK
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:  services:
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    mon: 3 daemons, quorum a,c,b (age 18m)
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    mgr: y(active, since 3m), standbys: x
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    osd: 8 osds: 8 up (since 16m), 8 in (since 16m)
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    rgw: 2 daemons active (2 hosts, 1 zones)
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:  data:
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    pools:   6 pools, 161 pgs
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    objects: 209 objects, 457 KiB
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    usage:   95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:    pgs:     161 active+clean
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:42:20.913 INFO:teuthology.orchestra.run.vm05.stdout:  io:
2026-03-10T11:42:20.914 INFO:teuthology.orchestra.run.vm05.stdout:    client: 853 B/s rd, 0 op/s rd, 0 op/s wr
2026-03-10T11:42:20.914 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-10T11:42:20.964 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-10T11:42:21.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:21 vm05 bash[22470]: audit 2026-03-10T11:42:20.408884+0000 mon.a (mon.0) 981 : audit [DBG] from='client.? 192.168.123.105:0/3034253833' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:42:21.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:21 vm05 bash[22470]: audit 2026-03-10T11:42:20.915822+0000 mon.c (mon.1) 53 : audit [DBG] from='client.? 192.168.123.105:0/1203301042' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:42:21.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:21 vm05 bash[17453]: audit 2026-03-10T11:42:20.408884+0000 mon.a (mon.0) 981 : audit [DBG] from='client.? 192.168.123.105:0/3034253833' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:42:21.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:21 vm05 bash[17453]: audit 2026-03-10T11:42:20.915822+0000 mon.c (mon.1) 53 : audit [DBG] from='client.? 192.168.123.105:0/1203301042' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:42:21.446 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
2026-03-10T11:42:21.495 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | length == 1'"'"''
2026-03-10T11:42:21.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:21 vm07 bash[17804]: audit 2026-03-10T11:42:20.408884+0000 mon.a (mon.0) 981 : audit [DBG] from='client.? 192.168.123.105:0/3034253833' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:42:21.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:21 vm07 bash[17804]: audit 2026-03-10T11:42:20.915822+0000 mon.c (mon.1) 53 : audit [DBG] from='client.? 192.168.123.105:0/1203301042' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
2026-03-10T11:42:21.952 INFO:teuthology.orchestra.run.vm05.stdout:true
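This is the staggered-upgrade invariant for the first phase: ceph versions groups daemons by version string, so '.mgr | length == 1' succeeds only when both mgr daemons report the same (upgraded) version, and jq -e turns that into the exit status the framework checks. A sketch of the same assertion run directly; the osd variant is an assumption for illustration, not taken from this run:

    # Assert all mgr daemons report a single version; jq -e exits non-zero otherwise.
    ceph versions | jq -e '.mgr | length == 1'
    # Hypothetical analogous check for OSDs at a later upgrade stage:
    ceph versions | jq -e '.osd | length == 1'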
2026-03-10T11:42:21.990 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph mgr fail'
2026-03-10T11:42:22.241 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:22 vm05 bash[17453]: cluster 2026-03-10T11:42:21.165876+0000 mgr.y (mgr.24859) 132 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:42:22.241 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:22 vm05 bash[17453]: audit 2026-03-10T11:42:21.448216+0000 mon.c (mon.1) 54 : audit [DBG] from='client.? 192.168.123.105:0/3229332979' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:42:22.241 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:22 vm05 bash[17453]: audit 2026-03-10T11:42:21.944597+0000 mon.a (mon.0) 982 : audit [DBG] from='client.? 192.168.123.105:0/3463324238' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:42:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:22 vm05 bash[22470]: cluster 2026-03-10T11:42:21.165876+0000 mgr.y (mgr.24859) 132 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:42:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:22 vm05 bash[22470]: audit 2026-03-10T11:42:21.448216+0000 mon.c (mon.1) 54 : audit [DBG] from='client.? 192.168.123.105:0/3229332979' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:42:22.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:22 vm05 bash[22470]: audit 2026-03-10T11:42:21.944597+0000 mon.a (mon.0) 982 : audit [DBG] from='client.? 192.168.123.105:0/3463324238' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:42:22.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:22 vm07 bash[17804]: cluster 2026-03-10T11:42:21.165876+0000 mgr.y (mgr.24859) 132 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:42:22.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:22 vm07 bash[17804]: audit 2026-03-10T11:42:21.448216+0000 mon.c (mon.1) 54 : audit [DBG] from='client.? 192.168.123.105:0/3229332979' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:42:22.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:22 vm07 bash[17804]: audit 2026-03-10T11:42:21.944597+0000 mon.a (mon.0) 982 : audit [DBG] from='client.? 192.168.123.105:0/3463324238' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:42:23.353 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 180'
2026-03-10T11:42:23.503 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:23 vm05 bash[22470]: audit 2026-03-10T11:42:22.417649+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.105:0/2966571344' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-10T11:42:23.504 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:23 vm05 bash[22470]: cluster 2026-03-10T11:42:22.424306+0000 mon.a (mon.0) 984 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T11:42:23.504 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:23 vm05 bash[17453]: audit 2026-03-10T11:42:22.417649+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.105:0/2966571344' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-10T11:42:23.504 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:23 vm05 bash[17453]: cluster 2026-03-10T11:42:22.424306+0000 mon.a (mon.0) 984 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T11:42:23.504 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:23 vm05 bash[53899]: ignoring --setuser ceph since I am not root
2026-03-10T11:42:23.504 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:23 vm05 bash[53899]: ignoring --setgroup ceph since I am not root
2026-03-10T11:42:23.504 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:23 vm05 bash[53899]: debug 2026-03-10T11:42:23.314+0000 7f259bfb1640 1 -- 192.168.123.105:0/2392168765 <== mon.0 v2:192.168.123.105:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55ba3f64d4a0 con 0x55ba3f64f800
2026-03-10T11:42:23.504 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:23 vm05 bash[53899]: debug 2026-03-10T11:42:23.394+0000 7f259e80e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-10T11:42:23.504 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:23 vm05 bash[53899]: debug 2026-03-10T11:42:23.426+0000 7f259e80e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-10T11:42:23.569 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:42:23 vm07 bash[36672]: [10/Mar/2026:11:42:23] ENGINE Bus STOPPING
2026-03-10T11:42:23.569 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:23 vm07 bash[17804]: audit 2026-03-10T11:42:22.417649+0000 mon.a (mon.0) 983 : audit [INF] from='client.? 192.168.123.105:0/2966571344' entity='client.admin' cmd=[{"prefix": "mgr fail"}]: dispatch
2026-03-10T11:42:23.569 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:23 vm07 bash[17804]: cluster 2026-03-10T11:42:22.424306+0000 mon.a (mon.0) 984 : cluster [DBG] osdmap e91: 8 total, 8 up, 8 in
2026-03-10T11:42:23.647 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:23 vm05 bash[53899]: debug 2026-03-10T11:42:23.574+0000 7f259e80e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-10T11:42:23.828 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:42:23 vm07 bash[36672]: [10/Mar/2026:11:42:23] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T11:42:23.828 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:42:23 vm07 bash[36672]: [10/Mar/2026:11:42:23] ENGINE Bus STOPPED
2026-03-10T11:42:23.828 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:42:23 vm07 bash[36672]: [10/Mar/2026:11:42:23] ENGINE Bus STARTING
2026-03-10T11:42:24.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:23 vm05 bash[53899]: debug 2026-03-10T11:42:23.878+0000 7f259e80e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-10T11:42:24.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:42:23 vm07 bash[36672]: [10/Mar/2026:11:42:23] ENGINE Serving on http://:::9283
2026-03-10T11:42:24.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:42:23 vm07 bash[36672]: [10/Mar/2026:11:42:23] ENGINE Bus STARTED
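'ceph mgr fail' with no argument fails the active mgr (y here), and the standby (x) takes over; the test then simply sleeps 180 s inside the shell container rather than polling for the new active. A hedged sketch of an explicit wait, if one preferred to confirm the failover instead of sleeping (the loop and variable names are assumptions; ceph mgr dump does report active_name):

    # Poll until a mgr other than the previously active one reports active (sketch).
    old=y
    for i in $(seq 1 60); do
        act=$(ceph mgr dump | jq -r '.active_name')
        if [ -n "$act" ] && [ "$act" != "$old" ]; then break; fi
        sleep 5
    done
    echo "active mgr is now: $act"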
2026-03-10T11:42:24.552 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.256417+0000 mon.a (mon.0) 985 : audit [INF] from='client.? 192.168.123.105:0/2966571344' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: cluster 2026-03-10T11:42:23.256629+0000 mon.a (mon.0) 986 : cluster [DBG] mgrmap e32: x(active, starting, since 0.837029s)
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.260483+0000 mon.b (mon.2) 217 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.260765+0000 mon.b (mon.2) 218 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.261041+0000 mon.b (mon.2) 219 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.262166+0000 mon.b (mon.2) 220 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.262466+0000 mon.b (mon.2) 221 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.262772+0000 mon.b (mon.2) 222 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.263096+0000 mon.b (mon.2) 223 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.263580+0000 mon.b (mon.2) 224 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.264005+0000 mon.b (mon.2) 225 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.264400+0000 mon.b (mon.2) 226 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.264770+0000 mon.b (mon.2) 227 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.265153+0000 mon.b (mon.2) 228 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.265625+0000 mon.b (mon.2) 229 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.265955+0000 mon.b (mon.2) 230 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.266422+0000 mon.b (mon.2) 231 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: cluster 2026-03-10T11:42:23.579846+0000 mon.a (mon.0) 987 : cluster [INF] Manager daemon x is now available
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.606134+0000 mon.b (mon.2) 232 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.617999+0000 mon.b (mon.2) 233 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.619531+0000 mon.a (mon.0) 988 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.619624+0000 mon.b (mon.2) 234 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.661510+0000 mon.b (mon.2) 235 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:24 vm05 bash[22470]: audit 2026-03-10T11:42:23.662364+0000 mon.a (mon.0) 989 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.256417+0000 mon.a (mon.0) 985 : audit [INF] from='client.? 192.168.123.105:0/2966571344' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: cluster 2026-03-10T11:42:23.256629+0000 mon.a (mon.0) 986 : cluster [DBG] mgrmap e32: x(active, starting, since 0.837029s)
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.260483+0000 mon.b (mon.2) 217 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.260765+0000 mon.b (mon.2) 218 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.261041+0000 mon.b (mon.2) 219 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.262166+0000 mon.b (mon.2) 220 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.262466+0000 mon.b (mon.2) 221 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.262772+0000 mon.b (mon.2) 222 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.263096+0000 mon.b (mon.2) 223 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.263580+0000 mon.b (mon.2) 224 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.264005+0000 mon.b (mon.2) 225 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T11:42:24.553 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.264400+0000 mon.b (mon.2) 226 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.264770+0000 mon.b (mon.2) 227 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.265153+0000 mon.b (mon.2) 228 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.265625+0000 mon.b (mon.2) 229 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.265955+0000 mon.b (mon.2) 230 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.266422+0000 mon.b (mon.2) 231 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: cluster 2026-03-10T11:42:23.579846+0000 mon.a (mon.0) 987 : cluster [INF] Manager daemon x is now available
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.606134+0000 mon.b (mon.2) 232 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.617999+0000 mon.b (mon.2) 233 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.619531+0000 mon.a (mon.0) 988 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.619624+0000 mon.b (mon.2) 234 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.661510+0000 mon.b (mon.2) 235 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:24 vm05 bash[17453]: audit 2026-03-10T11:42:23.662364+0000 mon.a (mon.0) 989 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: debug 2026-03-10T11:42:24.358+0000 7f259e80e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-10T11:42:24.554 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: debug 2026-03-10T11:42:24.438+0000 7f259e80e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.256417+0000 mon.a (mon.0) 985 : audit [INF] from='client.? 192.168.123.105:0/2966571344' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: cluster 2026-03-10T11:42:23.256629+0000 mon.a (mon.0) 986 : cluster [DBG] mgrmap e32: x(active, starting, since 0.837029s)
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.260483+0000 mon.b (mon.2) 217 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.260765+0000 mon.b (mon.2) 218 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.261041+0000 mon.b (mon.2) 219 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.262166+0000 mon.b (mon.2) 220 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.262466+0000 mon.b (mon.2) 221 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.262772+0000 mon.b (mon.2) 222 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.263096+0000 mon.b (mon.2) 223 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.263580+0000 mon.b (mon.2) 224 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.264005+0000 mon.b (mon.2) 225 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.264400+0000 mon.b (mon.2) 226 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.264770+0000 mon.b (mon.2) 227 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.265153+0000 mon.b (mon.2) 228 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.265625+0000 mon.b (mon.2) 229 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.265955+0000 mon.b (mon.2) 230 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.266422+0000 mon.b (mon.2) 231 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: cluster 2026-03-10T11:42:23.579846+0000 mon.a (mon.0) 987 : cluster [INF] Manager daemon x is now available
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.606134+0000 mon.b (mon.2) 232 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.617999+0000 mon.b (mon.2) 233 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.619531+0000 mon.a (mon.0) 988 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.619624+0000 mon.b (mon.2) 234 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.661510+0000 mon.b (mon.2) 235 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T11:42:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:24 vm07 bash[17804]: audit 2026-03-10T11:42:23.662364+0000 mon.a (mon.0) 989 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-10T11:42:24.839 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-10T11:42:24.839 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-10T11:42:24.839 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: from numpy import show_config as show_numpy_config
2026-03-10T11:42:24.839 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: debug 2026-03-10T11:42:24.558+0000 7f259e80e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-10T11:42:24.839 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: debug 2026-03-10T11:42:24.682+0000 7f259e80e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-10T11:42:24.839 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: debug 2026-03-10T11:42:24.722+0000 7f259e80e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-10T11:42:24.839 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: debug 2026-03-10T11:42:24.754+0000 7f259e80e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-10T11:42:24.839 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: debug 2026-03-10T11:42:24.790+0000 7f259e80e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-10T11:42:25.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:24 vm05 bash[53899]: debug 2026-03-10T11:42:24.838+0000 7f259e80e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:25 vm05 bash[22470]: cluster 2026-03-10T11:42:24.281390+0000 mon.a (mon.0) 990 : cluster [DBG] mgrmap e33: x(active, since 1.8618s)
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:25 vm05 bash[22470]: cephadm 2026-03-10T11:42:24.601941+0000 mgr.x (mgr.24770) 2 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Bus STARTING
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:25 vm05 bash[22470]: cephadm 2026-03-10T11:42:24.703204+0000 mgr.x (mgr.24770) 3 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Serving on http://192.168.123.107:8765
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:25 vm05 bash[22470]: cephadm 2026-03-10T11:42:24.811569+0000 mgr.x (mgr.24770) 4 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Serving on https://192.168.123.107:7150
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:25 vm05 bash[22470]: cephadm 2026-03-10T11:42:24.811636+0000 mgr.x (mgr.24770) 5 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Bus STARTED
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:25 vm05 bash[22470]: cephadm 2026-03-10T11:42:24.811859+0000 mgr.x (mgr.24770) 6 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Client ('192.168.123.107', 57554) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:25 vm05 bash[17453]: cluster 2026-03-10T11:42:24.281390+0000 mon.a (mon.0) 990 : cluster [DBG] mgrmap e33: x(active, since 1.8618s)
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:25 vm05 bash[17453]: cephadm 2026-03-10T11:42:24.601941+0000 mgr.x (mgr.24770) 2 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Bus STARTING
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:25 vm05 bash[17453]: cephadm 2026-03-10T11:42:24.703204+0000 mgr.x (mgr.24770) 3 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Serving on http://192.168.123.107:8765
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:25 vm05 bash[17453]: cephadm 2026-03-10T11:42:24.811569+0000 mgr.x (mgr.24770) 4 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Serving on https://192.168.123.107:7150
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:25 vm05 bash[17453]: cephadm 2026-03-10T11:42:24.811636+0000 mgr.x (mgr.24770) 5 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Bus STARTED
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:25 vm05 bash[17453]: cephadm 2026-03-10T11:42:24.811859+0000 mgr.x (mgr.24770) 6 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Client ('192.168.123.107', 57554) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.246+0000 7f259e80e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.290+0000 7f259e80e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.326+0000 7f259e80e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-10T11:42:25.496 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.458+0000 7f259e80e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-10T11:42:25.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:25 vm07 bash[17804]: cluster 2026-03-10T11:42:24.281390+0000 mon.a (mon.0) 990 : cluster [DBG] mgrmap e33: x(active, since 1.8618s)
2026-03-10T11:42:25.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:25 vm07 bash[17804]: cephadm 2026-03-10T11:42:24.601941+0000 mgr.x (mgr.24770) 2 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Bus STARTING
2026-03-10T11:42:25.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:25 vm07 bash[17804]: cephadm 2026-03-10T11:42:24.703204+0000 mgr.x (mgr.24770) 3 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Serving on http://192.168.123.107:8765
2026-03-10T11:42:25.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:25 vm07 bash[17804]: cephadm 2026-03-10T11:42:24.811569+0000 mgr.x (mgr.24770) 4 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Serving on https://192.168.123.107:7150
2026-03-10T11:42:25.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:25 vm07 bash[17804]: cephadm 2026-03-10T11:42:24.811636+0000 mgr.x (mgr.24770) 5 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Bus STARTED
2026-03-10T11:42:25.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:25 vm07 bash[17804]: cephadm 2026-03-10T11:42:24.811859+0000 mgr.x (mgr.24770) 6 : cephadm [INF] [10/Mar/2026:11:42:24] ENGINE Client ('192.168.123.107', 57554) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:42:25.783 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.494+0000 7f259e80e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-10T11:42:25.783 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.530+0000 7f259e80e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-10T11:42:25.783 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.638+0000 7f259e80e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:42:26.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.782+0000 7f259e80e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T11:42:26.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.950+0000 7f259e80e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T11:42:26.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:25 vm05 bash[53899]: debug 2026-03-10T11:42:25.986+0000 7f259e80e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T11:42:26.092 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:26 vm05 bash[53899]: debug 2026-03-10T11:42:26.026+0000 7f259e80e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T11:42:26.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:26 vm05 bash[22470]: cluster 2026-03-10T11:42:25.263326+0000 mgr.x (mgr.24770) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:42:26.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:26 vm05 bash[17453]: cluster 2026-03-10T11:42:25.263326+0000 mgr.x (mgr.24770) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:42:26.592 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:26 vm05 bash[53899]: debug 2026-03-10T11:42:26.170+0000 7f259e80e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:42:26.592 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:26 vm05 bash[53899]: debug 2026-03-10T11:42:26.394+0000 7f259e80e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T11:42:26.592 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:26 vm05 bash[53899]: [10/Mar/2026:11:42:26] ENGINE Bus STARTING
2026-03-10T11:42:26.592 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:26 vm05 bash[53899]: CherryPy Checker:
2026-03-10T11:42:26.592 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:26 vm05 bash[53899]: The Application mounted at '' has an empty config.
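The NOTIFY_TYPES warnings above are the restarted mgr.y loading its Python modules as it comes back up; the mgrmap lines that follow show the end state the test is converging on, x(active) with y as standby. A sketch of how that end state could be asserted (the jq expression is an assumption; ceph mgr dump does expose active_name and a standbys array):

    # Assert there is an active mgr and at least one standby (sketch).
    ceph mgr dump | jq -e '.active_name != "" and (.standbys | length >= 1)'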
2026-03-10T11:42:26.592 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:26 vm05 bash[53899]: [10/Mar/2026:11:42:26] ENGINE Serving on http://:::9283
2026-03-10T11:42:26.592 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:26 vm05 bash[53899]: [10/Mar/2026:11:42:26] ENGINE Bus STARTED
2026-03-10T11:42:26.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:26 vm07 bash[17804]: cluster 2026-03-10T11:42:25.263326+0000 mgr.x (mgr.24770) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:27 vm05 bash[22470]: cluster 2026-03-10T11:42:26.301238+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e34: x(active, since 3s)
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:27 vm05 bash[22470]: cluster 2026-03-10T11:42:26.398434+0000 mon.a (mon.0) 992 : cluster [DBG] Standby manager daemon y started
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:27 vm05 bash[22470]: audit 2026-03-10T11:42:26.401283+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:27 vm05 bash[22470]: audit 2026-03-10T11:42:26.401904+0000 mon.c (mon.1) 56 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:27 vm05 bash[22470]: audit 2026-03-10T11:42:26.402739+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:27 vm05 bash[22470]: audit 2026-03-10T11:42:26.403268+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:27 vm05 bash[17453]: cluster 2026-03-10T11:42:26.301238+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e34: x(active, since 3s)
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:27 vm05 bash[17453]: cluster 2026-03-10T11:42:26.398434+0000 mon.a (mon.0) 992 : cluster [DBG] Standby manager daemon y started
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:27 vm05 bash[17453]: audit 2026-03-10T11:42:26.401283+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:27 vm05 bash[17453]: audit 2026-03-10T11:42:26.401904+0000 mon.c (mon.1) 56 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:27 vm05 bash[17453]: audit 2026-03-10T11:42:26.402739+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch
2026-03-10T11:42:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:27 vm05 bash[17453]: audit 2026-03-10T11:42:26.403268+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:42:27.642 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:27 vm07 bash[17804]: cluster 2026-03-10T11:42:26.301238+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e34: x(active, since 3s)
2026-03-10T11:42:27.642 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:27 vm07 bash[17804]: cluster 2026-03-10T11:42:26.398434+0000 mon.a (mon.0) 992 : cluster [DBG] Standby manager daemon y started
2026-03-10T11:42:27.642 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:27 vm07 bash[17804]: audit 2026-03-10T11:42:26.401283+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-10T11:42:27.642 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:27 vm07 bash[17804]: audit 2026-03-10T11:42:26.401904+0000 mon.c (mon.1) 56 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:42:27.642 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:27 vm07 bash[17804]: audit 2026-03-10T11:42:26.402739+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch
2026-03-10T11:42:27.642 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:27 vm07 bash[17804]: audit 2026-03-10T11:42:26.403268+0000 mon.c (mon.1) 58 : audit [DBG] from='mgr.? 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:42:27.945 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:27 vm07 bash[40852]: ts=2026-03-10T11:42:27.644Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:42:27.945 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:27 vm07 bash[40852]: ts=2026-03-10T11:42:27.645Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:42:27.945 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:27 vm07 bash[40852]: ts=2026-03-10T11:42:27.645Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:42:27.945 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:27 vm07 bash[40852]: ts=2026-03-10T11:42:27.647Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:42:27.945 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:27 vm07 bash[40852]: ts=2026-03-10T11:42:27.647Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:42:27.945 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:27 vm07 bash[40852]: ts=2026-03-10T11:42:27.647Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.105:8765: connect: connection refused"
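These Prometheus errors are a side effect of the failover window: the cephadm HTTP service-discovery endpoint on port 8765 went down with the failed mgr on vm05 and, as the "ENGINE Serving on http://192.168.123.107:8765" lines above show, is now served by the new active mgr on vm07, so refreshes against the old address are refused until the discovery configuration catches up. A sketch of probing the endpoint by hand, with the URL taken from the error messages (the curl flags are an assumption):

    # Probe the cephadm service-discovery endpoint Prometheus scrapes (sketch).
    curl -fsS 'http://192.168.123.105:8765/sd/prometheus/sd-config?service=mgr-prometheus' \
        || echo 'service-discovery endpoint not answering on this host'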
2026-03-10T11:42:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:28 vm05 bash[22470]: audit 2026-03-10T11:42:27.223824+0000 mgr.x (mgr.24770) 8 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:42:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:28 vm05 bash[22470]: cluster 2026-03-10T11:42:27.263661+0000 mgr.x (mgr.24770) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:42:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:28 vm05 bash[22470]: cluster 2026-03-10T11:42:27.324926+0000 mon.a (mon.0) 993 : cluster [DBG] mgrmap e35: x(active, since 4s), standbys: y
2026-03-10T11:42:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:28 vm05 bash[22470]: audit 2026-03-10T11:42:27.328261+0000 mon.b (mon.2) 236 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T11:42:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:28 vm05 bash[17453]: audit 2026-03-10T11:42:27.223824+0000 mgr.x (mgr.24770) 8 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:42:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:28 vm05 bash[17453]: cluster 2026-03-10T11:42:27.263661+0000 mgr.x (mgr.24770) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:42:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:28 vm05 bash[17453]: cluster 2026-03-10T11:42:27.324926+0000 mon.a (mon.0) 993 : cluster [DBG] mgrmap e35: x(active, since 4s), standbys: y
2026-03-10T11:42:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:28 vm05 bash[17453]: audit 2026-03-10T11:42:27.328261+0000 mon.b (mon.2) 236 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T11:42:28.694 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:28 vm07 bash[17804]: audit 2026-03-10T11:42:27.223824+0000 mgr.x (mgr.24770) 8 : audit [DBG] from='client.24928 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:42:28.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:28 vm07 bash[17804]: cluster 2026-03-10T11:42:27.263661+0000 mgr.x (mgr.24770) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:42:28.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:28 vm07 bash[17804]: cluster 2026-03-10T11:42:27.324926+0000 mon.a (mon.0) 993 : cluster [DBG] mgrmap e35: x(active, since 4s), standbys: y
2026-03-10T11:42:28.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:28 vm07 bash[17804]: audit 2026-03-10T11:42:27.328261+0000 mon.b (mon.2) 236 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T11:42:29.342 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:42:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:42:28] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.51.0"
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: cluster 2026-03-10T11:42:29.263970+0000 mgr.x (mgr.24770) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:29.476076+0000 mon.a (mon.0) 994 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:29.563397+0000 mon.a (mon.0) 995 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:29.571767+0000 mon.a (mon.0) 996 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:29.579408+0000 mon.a (mon.0) 997 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.136134+0000 mon.a (mon.0) 998 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.142669+0000 mon.a (mon.0) 999 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.144151+0000 mon.b (mon.2) 237 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.146531+0000 mon.a (mon.0) 1000 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.147333+0000 mon.a (mon.0) 1001 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.153595+0000 mon.a (mon.0) 1002 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.154853+0000 mon.b (mon.2) 238 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.155487+0000 mon.a (mon.0) 1003 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.155958+0000 mon.b (mon.2) 239 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.156614+0000 mon.b (mon.2) 240 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.157461+0000 mgr.x (mgr.24770) 11 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.157612+0000 mgr.x (mgr.24770) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.304145+0000 mon.a (mon.0) 1004 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.311266+0000 mon.a (mon.0) 1005 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.318013+0000 mon.a (mon.0)
1006 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.324467+0000 mon.a (mon.0) 1007 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.329681+0000 mon.a (mon.0) 1008 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.340403+0000 mon.b (mon.2) 241 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.341314+0000 mon.a (mon.0) 1009 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:30 vm05 bash[22470]: audit 2026-03-10T11:42:30.344608+0000 mon.b (mon.2) 242 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:42:30.725 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: cluster 2026-03-10T11:42:29.263970+0000 mgr.x (mgr.24770) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:29.476076+0000 mon.a (mon.0) 994 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:29.563397+0000 mon.a (mon.0) 995 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:29.571767+0000 mon.a (mon.0) 996 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:29.579408+0000 mon.a (mon.0) 997 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.136134+0000 mon.a (mon.0) 998 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.142669+0000 mon.a (mon.0) 999 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.144151+0000 mon.b (mon.2) 237 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:42:30.726 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.146531+0000 mon.a (mon.0) 1000 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.147333+0000 mon.a (mon.0) 1001 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.153595+0000 mon.a (mon.0) 1002 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.154853+0000 mon.b (mon.2) 238 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.155487+0000 mon.a (mon.0) 1003 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.155958+0000 mon.b (mon.2) 239 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.156614+0000 mon.b (mon.2) 240 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.157461+0000 mgr.x (mgr.24770) 11 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.157612+0000 mgr.x (mgr.24770) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.304145+0000 mon.a (mon.0) 1004 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.311266+0000 mon.a (mon.0) 1005 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.318013+0000 mon.a (mon.0) 1006 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.324467+0000 mon.a (mon.0) 1007 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.329681+0000 mon.a (mon.0) 1008 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.340403+0000 mon.b (mon.2) 241 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", 
"entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.341314+0000 mon.a (mon.0) 1009 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:42:30.726 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:30 vm05 bash[17453]: audit 2026-03-10T11:42:30.344608+0000 mon.b (mon.2) 242 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: cluster 2026-03-10T11:42:29.263970+0000 mgr.x (mgr.24770) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 95 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:29.476076+0000 mon.a (mon.0) 994 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:29.563397+0000 mon.a (mon.0) 995 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:29.571767+0000 mon.a (mon.0) 996 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:29.579408+0000 mon.a (mon.0) 997 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.136134+0000 mon.a (mon.0) 998 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.142669+0000 mon.a (mon.0) 999 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.144151+0000 mon.b (mon.2) 237 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.146531+0000 mon.a (mon.0) 1000 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.147333+0000 mon.a (mon.0) 1001 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.153595+0000 mon.a (mon.0) 1002 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.154853+0000 mon.b (mon.2) 238 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.155487+0000 mon.a (mon.0) 1003 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.155958+0000 mon.b (mon.2) 239 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.156614+0000 mon.b (mon.2) 240 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.157461+0000 mgr.x (mgr.24770) 11 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.157612+0000 mgr.x (mgr.24770) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.304145+0000 mon.a (mon.0) 1004 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.311266+0000 mon.a (mon.0) 1005 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.318013+0000 mon.a (mon.0) 1006 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.324467+0000 mon.a (mon.0) 1007 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.329681+0000 mon.a (mon.0) 1008 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.340403+0000 mon.b (mon.2) 241 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.341314+0000 mon.a (mon.0) 1009 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service 
status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:42:30.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:30 vm07 bash[17804]: audit 2026-03-10T11:42:30.344608+0000 mon.b (mon.2) 242 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 systemd[1]: Stopping Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d... 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.478Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.478Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.478Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.478Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.478Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped" 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.478Z caller=main.go:1039 level=info msg="Stopping scrape manager..." 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.478Z caller=main.go:984 level=info msg="Scrape discovery manager stopped" 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.478Z caller=main.go:998 level=info msg="Notify discovery manager stopped" 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.478Z caller=main.go:1031 level=info msg="Scrape manager stopped" 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.480Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..." 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.480Z caller=main.go:1261 level=info msg="Notifier manager stopped" 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[40852]: ts=2026-03-10T11:42:31.480Z caller=main.go:1273 level=info msg="See you next time!" 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42034]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-prometheus-a 2026-03-10T11:42:31.539 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@prometheus.a.service: Deactivated successfully. 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 systemd[1]: Stopped Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 
2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 systemd[1]: Started Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.700Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)" 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.700Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)" 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.700Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm07 (none))" 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.700Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.700Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.706Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.706Z caller=main.go:1129 level=info msg="Starting TSDB ..." 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.708Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.708Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.382µs 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.708Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.713Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.713Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9095 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.718Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=4 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.735Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=4 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.744Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=4 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.747Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=4 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.747Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=4 maxSegment=4 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.747Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=25.809µs wal_replay_duration=38.956029ms wbl_replay_duration=120ns total_replay_duration=39.40537ms 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.750Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.750Z caller=main.go:1153 level=info msg="TSDB started" 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.750Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.771Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=21.106281ms db_storage=742ns remote_storage=1.402µs web_handler=432ns query_engine=871ns scrape=737.638µs scrape_sd=233.296µs notify=11.261µs notify_sd=6.472µs rules=19.548421ms tracing=5.04µs 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.772Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 2026-03-10T11:42:31.855 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:42:31 vm07 bash[42110]: ts=2026-03-10T11:42:31.772Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
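The restart completes cleanly: all five WAL segments replay in about 39 ms, the regenerated /etc/prometheus/prometheus.yml loads in about 21 ms, and the server reports ready on :9095. Had the reload failed instead, one way to validate the rendered config would be promtool inside the daemon's container (assuming, as is usual for the stock Prometheus image, that promtool ships alongside the server binary):

    # Run promtool against the config cephadm mounted into the container;
    # 'cephadm enter' executes a command inside the named daemon's container.
    cephadm enter --name prometheus.a -- promtool check config /etc/prometheus/prometheus.yml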
2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.194905+0000 mgr.x (mgr.24770) 13 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.196333+0000 mgr.x (mgr.24770) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.229394+0000 mgr.x (mgr.24770) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.229484+0000 mgr.x (mgr.24770) 16 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.265305+0000 mgr.x (mgr.24770) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.271096+0000 mgr.x (mgr.24770) 18 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.340101+0000 mgr.x (mgr.24770) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)... 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.345507+0000 mgr.x (mgr.24770) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:30.857800+0000 mon.a (mon.0) 1010 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:30.863606+0000 mon.a (mon.0) 1011 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:30.865001+0000 mgr.x (mgr.24770) 21 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: cephadm 2026-03-10T11:42:31.019225+0000 mgr.x (mgr.24770) 22 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:31.328807+0000 mon.a (mon.0) 1012 : audit [DBG] from='client.? 192.168.123.105:0/2436872321' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:31.525543+0000 mon.c (mon.1) 59 : audit [INF] from='client.? 
192.168.123.105:0/1422383888' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2575235517"}]: dispatch 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:31.526239+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2575235517"}]: dispatch 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:31.560857+0000 mon.a (mon.0) 1014 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:31.567631+0000 mon.a (mon.0) 1015 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:31.570211+0000 mon.b (mon.2) 243 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:31.572678+0000 mon.b (mon.2) 244 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:31.575895+0000 mon.b (mon.2) 245 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:42:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:31 vm07 bash[17804]: audit 2026-03-10T11:42:31.608463+0000 mon.b (mon.2) 246 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.194905+0000 mgr.x (mgr.24770) 13 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.196333+0000 mgr.x (mgr.24770) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.229394+0000 mgr.x (mgr.24770) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.229484+0000 mgr.x (mgr.24770) 16 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.265305+0000 mgr.x (mgr.24770) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.271096+0000 mgr.x (mgr.24770) 18 : cephadm [INF] Updating 
vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.340101+0000 mgr.x (mgr.24770) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)... 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.345507+0000 mgr.x (mgr.24770) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:30.857800+0000 mon.a (mon.0) 1010 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:30.863606+0000 mon.a (mon.0) 1011 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:30.865001+0000 mgr.x (mgr.24770) 21 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T11:42:32.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: cephadm 2026-03-10T11:42:31.019225+0000 mgr.x (mgr.24770) 22 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:31.328807+0000 mon.a (mon.0) 1012 : audit [DBG] from='client.? 192.168.123.105:0/2436872321' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:31.525543+0000 mon.c (mon.1) 59 : audit [INF] from='client.? 192.168.123.105:0/1422383888' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2575235517"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:31.526239+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2575235517"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:31.560857+0000 mon.a (mon.0) 1014 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:31.567631+0000 mon.a (mon.0) 1015 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:31.570211+0000 mon.b (mon.2) 243 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:31.572678+0000 mon.b (mon.2) 244 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:31.575895+0000 mon.b (mon.2) 245 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:31 vm05 bash[17453]: audit 2026-03-10T11:42:31.608463+0000 mon.b (mon.2) 246 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.194905+0000 mgr.x (mgr.24770) 13 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.196333+0000 mgr.x (mgr.24770) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.229394+0000 mgr.x (mgr.24770) 15 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.229484+0000 mgr.x (mgr.24770) 16 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.265305+0000 mgr.x (mgr.24770) 17 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.271096+0000 mgr.x (mgr.24770) 18 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.340101+0000 mgr.x (mgr.24770) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)... 
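This block is cephadm's post-failover reconciliation pass: mgr.x strips the per-host osd_memory_target overrides, regenerates the minimal ceph.conf and admin keyring on both hosts, re-issues the iscsi gateway's caps, and the gateway in turn clears its stale OSD blocklist entries. The audited mon commands map one-to-one onto the CLI; their equivalents, with the entity name and addresses copied verbatim from the log:

    # Drop the per-host memory-target override and regenerate the minimal conf:
    ceph config rm osd/host:vm07 osd_memory_target
    ceph config generate-minimal-conf
    # Re-issue the iscsi gateway's caps exactly as dispatched above:
    ceph auth get-or-create client.iscsi.foo.vm05.txapnk \
        mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
        mgr 'allow command "service status"' \
        osd 'allow rwx'
    # The gateway then removes its old blocklist entries, e.g.:
    ceph osd blocklist rm 192.168.123.105:0/2575235517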
2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.345507+0000 mgr.x (mgr.24770) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:30.857800+0000 mon.a (mon.0) 1010 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:30.863606+0000 mon.a (mon.0) 1011 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:30.865001+0000 mgr.x (mgr.24770) 21 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)... 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: cephadm 2026-03-10T11:42:31.019225+0000 mgr.x (mgr.24770) 22 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:31.328807+0000 mon.a (mon.0) 1012 : audit [DBG] from='client.? 192.168.123.105:0/2436872321' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:31.525543+0000 mon.c (mon.1) 59 : audit [INF] from='client.? 192.168.123.105:0/1422383888' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2575235517"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:31.526239+0000 mon.a (mon.0) 1013 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2575235517"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:31.560857+0000 mon.a (mon.0) 1014 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:31.567631+0000 mon.a (mon.0) 1015 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:31.570211+0000 mon.b (mon.2) 243 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:31.572678+0000 mon.b (mon.2) 244 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:31.575895+0000 mon.b (mon.2) 245 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:42:32.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:31 vm05 bash[22470]: audit 2026-03-10T11:42:31.608463+0000 mon.b (mon.2) 246 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:42:33.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:32 vm07 bash[17804]: cluster 2026-03-10T11:42:31.264519+0000 mgr.x (mgr.24770) 23 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-10T11:42:33.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:32 vm07 bash[17804]: audit 2026-03-10T11:42:31.570599+0000 mgr.x (mgr.24770) 24 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:42:33.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:32 vm07 bash[17804]: audit 2026-03-10T11:42:31.572885+0000 mgr.x (mgr.24770) 25 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:42:33.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:32 vm07 bash[17804]: audit 2026-03-10T11:42:31.576089+0000 mgr.x (mgr.24770) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:42:33.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:32 vm07 bash[17804]: audit 2026-03-10T11:42:31.875502+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2575235517"}]': finished 2026-03-10T11:42:33.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:32 vm07 bash[17804]: cluster 2026-03-10T11:42:31.875652+0000 mon.a (mon.0) 1017 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T11:42:33.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:32 vm07 bash[17804]: audit 2026-03-10T11:42:32.062893+0000 mon.c (mon.1) 60 : audit [INF] from='client.? 
192.168.123.105:0/1331350399' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3952505744"}]: dispatch 2026-03-10T11:42:33.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:32 vm07 bash[17804]: audit 2026-03-10T11:42:32.063301+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3952505744"}]: dispatch 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:32 vm05 bash[22470]: cluster 2026-03-10T11:42:31.264519+0000 mgr.x (mgr.24770) 23 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:32 vm05 bash[22470]: audit 2026-03-10T11:42:31.570599+0000 mgr.x (mgr.24770) 24 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:32 vm05 bash[22470]: audit 2026-03-10T11:42:31.572885+0000 mgr.x (mgr.24770) 25 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:32 vm05 bash[22470]: audit 2026-03-10T11:42:31.576089+0000 mgr.x (mgr.24770) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:32 vm05 bash[22470]: audit 2026-03-10T11:42:31.875502+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2575235517"}]': finished 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:32 vm05 bash[22470]: cluster 2026-03-10T11:42:31.875652+0000 mon.a (mon.0) 1017 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:32 vm05 bash[22470]: audit 2026-03-10T11:42:32.062893+0000 mon.c (mon.1) 60 : audit [INF] from='client.? 192.168.123.105:0/1331350399' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3952505744"}]: dispatch 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:32 vm05 bash[22470]: audit 2026-03-10T11:42:32.063301+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3952505744"}]: dispatch 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:32 vm05 bash[17453]: cluster 2026-03-10T11:42:31.264519+0000 mgr.x (mgr.24770) 23 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:32 vm05 bash[17453]: audit 2026-03-10T11:42:31.570599+0000 mgr.x (mgr.24770) 24 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:32 vm05 bash[17453]: audit 2026-03-10T11:42:31.572885+0000 mgr.x (mgr.24770) 25 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:42:33.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:32 vm05 bash[17453]: audit 2026-03-10T11:42:31.576089+0000 mgr.x (mgr.24770) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:42:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:32 vm05 bash[17453]: audit 2026-03-10T11:42:31.875502+0000 mon.a (mon.0) 1016 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2575235517"}]': finished 2026-03-10T11:42:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:32 vm05 bash[17453]: cluster 2026-03-10T11:42:31.875652+0000 mon.a (mon.0) 1017 : cluster [DBG] osdmap e92: 8 total, 8 up, 8 in 2026-03-10T11:42:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:32 vm05 bash[17453]: audit 2026-03-10T11:42:32.062893+0000 mon.c (mon.1) 60 : audit [INF] from='client.? 192.168.123.105:0/1331350399' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3952505744"}]: dispatch 2026-03-10T11:42:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:32 vm05 bash[17453]: audit 2026-03-10T11:42:32.063301+0000 mon.a (mon.0) 1018 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3952505744"}]: dispatch 2026-03-10T11:42:34.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:33 vm07 bash[17804]: audit 2026-03-10T11:42:32.882245+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3952505744"}]': finished 2026-03-10T11:42:34.194 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:33 vm07 bash[17804]: cluster 2026-03-10T11:42:32.882396+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T11:42:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:33 vm07 bash[17804]: audit 2026-03-10T11:42:33.089998+0000 mon.c (mon.1) 61 : audit [INF] from='client.? 192.168.123.105:0/549678483' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3952505744"}]: dispatch 2026-03-10T11:42:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:33 vm07 bash[17804]: audit 2026-03-10T11:42:33.090338+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3952505744"}]: dispatch 2026-03-10T11:42:34.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:33 vm05 bash[22470]: audit 2026-03-10T11:42:32.882245+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3952505744"}]': finished 2026-03-10T11:42:34.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:33 vm05 bash[22470]: cluster 2026-03-10T11:42:32.882396+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T11:42:34.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:33 vm05 bash[22470]: audit 2026-03-10T11:42:33.089998+0000 mon.c (mon.1) 61 : audit [INF] from='client.? 192.168.123.105:0/549678483' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3952505744"}]: dispatch 2026-03-10T11:42:34.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:33 vm05 bash[22470]: audit 2026-03-10T11:42:33.090338+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3952505744"}]: dispatch 2026-03-10T11:42:34.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:33 vm05 bash[17453]: audit 2026-03-10T11:42:32.882245+0000 mon.a (mon.0) 1019 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3952505744"}]': finished 2026-03-10T11:42:34.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:33 vm05 bash[17453]: cluster 2026-03-10T11:42:32.882396+0000 mon.a (mon.0) 1020 : cluster [DBG] osdmap e93: 8 total, 8 up, 8 in 2026-03-10T11:42:34.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:33 vm05 bash[17453]: audit 2026-03-10T11:42:33.089998+0000 mon.c (mon.1) 61 : audit [INF] from='client.? 192.168.123.105:0/549678483' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3952505744"}]: dispatch 2026-03-10T11:42:34.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:33 vm05 bash[17453]: audit 2026-03-10T11:42:33.090338+0000 mon.a (mon.0) 1021 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3952505744"}]: dispatch 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:34 vm05 bash[22470]: cluster 2026-03-10T11:42:33.264802+0000 mgr.x (mgr.24770) 27 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:34 vm05 bash[22470]: audit 2026-03-10T11:42:33.981072+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3952505744"}]': finished 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:34 vm05 bash[22470]: cluster 2026-03-10T11:42:33.981124+0000 mon.a (mon.0) 1023 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:34 vm05 bash[22470]: audit 2026-03-10T11:42:34.170258+0000 mon.b (mon.2) 247 : audit [INF] from='client.? 
192.168.123.105:0/760431334' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2502714718"}]: dispatch 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:34 vm05 bash[22470]: audit 2026-03-10T11:42:34.171016+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2502714718"}]: dispatch 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:34 vm05 bash[17453]: cluster 2026-03-10T11:42:33.264802+0000 mgr.x (mgr.24770) 27 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:34 vm05 bash[17453]: audit 2026-03-10T11:42:33.981072+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3952505744"}]': finished 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:34 vm05 bash[17453]: cluster 2026-03-10T11:42:33.981124+0000 mon.a (mon.0) 1023 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:34 vm05 bash[17453]: audit 2026-03-10T11:42:34.170258+0000 mon.b (mon.2) 247 : audit [INF] from='client.? 192.168.123.105:0/760431334' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2502714718"}]: dispatch 2026-03-10T11:42:35.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:34 vm05 bash[17453]: audit 2026-03-10T11:42:34.171016+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2502714718"}]: dispatch 2026-03-10T11:42:35.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:34 vm07 bash[17804]: cluster 2026-03-10T11:42:33.264802+0000 mgr.x (mgr.24770) 27 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-10T11:42:35.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:34 vm07 bash[17804]: audit 2026-03-10T11:42:33.981072+0000 mon.a (mon.0) 1022 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3952505744"}]': finished 2026-03-10T11:42:35.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:34 vm07 bash[17804]: cluster 2026-03-10T11:42:33.981124+0000 mon.a (mon.0) 1023 : cluster [DBG] osdmap e94: 8 total, 8 up, 8 in 2026-03-10T11:42:35.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:34 vm07 bash[17804]: audit 2026-03-10T11:42:34.170258+0000 mon.b (mon.2) 247 : audit [INF] from='client.? 192.168.123.105:0/760431334' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2502714718"}]: dispatch 2026-03-10T11:42:35.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:34 vm07 bash[17804]: audit 2026-03-10T11:42:34.171016+0000 mon.a (mon.0) 1024 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2502714718"}]: dispatch 2026-03-10T11:42:36.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:35 vm05 bash[22470]: audit 2026-03-10T11:42:34.990264+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2502714718"}]': finished 2026-03-10T11:42:36.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:35 vm05 bash[22470]: cluster 2026-03-10T11:42:34.990342+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T11:42:36.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:35 vm05 bash[22470]: audit 2026-03-10T11:42:35.187246+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? 192.168.123.105:0/3905729424' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1957796161"}]: dispatch 2026-03-10T11:42:36.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:35 vm05 bash[17453]: audit 2026-03-10T11:42:34.990264+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2502714718"}]': finished 2026-03-10T11:42:36.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:35 vm05 bash[17453]: cluster 2026-03-10T11:42:34.990342+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T11:42:36.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:35 vm05 bash[17453]: audit 2026-03-10T11:42:35.187246+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? 192.168.123.105:0/3905729424' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1957796161"}]: dispatch 2026-03-10T11:42:36.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:35 vm07 bash[17804]: audit 2026-03-10T11:42:34.990264+0000 mon.a (mon.0) 1025 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2502714718"}]': finished 2026-03-10T11:42:36.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:35 vm07 bash[17804]: cluster 2026-03-10T11:42:34.990342+0000 mon.a (mon.0) 1026 : cluster [DBG] osdmap e95: 8 total, 8 up, 8 in 2026-03-10T11:42:36.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:35 vm07 bash[17804]: audit 2026-03-10T11:42:35.187246+0000 mon.a (mon.0) 1027 : audit [INF] from='client.? 192.168.123.105:0/3905729424' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1957796161"}]: dispatch 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:36 vm05 bash[22470]: cluster 2026-03-10T11:42:35.265110+0000 mgr.x (mgr.24770) 28 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 2 op/s 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:36 vm05 bash[22470]: audit 2026-03-10T11:42:35.998883+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
192.168.123.105:0/3905729424' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1957796161"}]': finished 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:36 vm05 bash[22470]: cluster 2026-03-10T11:42:35.998962+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:36 vm05 bash[22470]: audit 2026-03-10T11:42:36.191778+0000 mon.c (mon.1) 62 : audit [INF] from='client.? 192.168.123.105:0/3717858422' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2796425544"}]: dispatch 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:36 vm05 bash[22470]: audit 2026-03-10T11:42:36.192096+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2796425544"}]: dispatch 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:37 vm05 bash[22470]: audit 2026-03-10T11:42:36.908168+0000 mon.a (mon.0) 1031 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:37 vm05 bash[22470]: audit 2026-03-10T11:42:36.913621+0000 mon.a (mon.0) 1032 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:37 vm05 bash[22470]: audit 2026-03-10T11:42:36.977532+0000 mon.a (mon.0) 1033 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:37 vm05 bash[22470]: audit 2026-03-10T11:42:36.982967+0000 mon.a (mon.0) 1034 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:37 vm05 bash[22470]: audit 2026-03-10T11:42:36.984222+0000 mon.b (mon.2) 248 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:37 vm05 bash[22470]: audit 2026-03-10T11:42:36.984927+0000 mon.b (mon.2) 249 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:37 vm05 bash[22470]: audit 2026-03-10T11:42:36.989584+0000 mon.a (mon.0) 1035 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: cluster 2026-03-10T11:42:35.265110+0000 mgr.x (mgr.24770) 28 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 2 op/s 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:35.998883+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
192.168.123.105:0/3905729424' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1957796161"}]': finished 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: cluster 2026-03-10T11:42:35.998962+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:36.191778+0000 mon.c (mon.1) 62 : audit [INF] from='client.? 192.168.123.105:0/3717858422' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2796425544"}]: dispatch 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:36.192096+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2796425544"}]: dispatch 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:36.908168+0000 mon.a (mon.0) 1031 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:36.913621+0000 mon.a (mon.0) 1032 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:36.977532+0000 mon.a (mon.0) 1033 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:36.982967+0000 mon.a (mon.0) 1034 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:36.984222+0000 mon.b (mon.2) 248 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:36.984927+0000 mon.b (mon.2) 249 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:42:37.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:36 vm05 bash[17453]: audit 2026-03-10T11:42:36.989584+0000 mon.a (mon.0) 1035 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:36 vm07 bash[17804]: cluster 2026-03-10T11:42:35.265110+0000 mgr.x (mgr.24770) 28 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 2 op/s 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:35.998883+0000 mon.a (mon.0) 1028 : audit [INF] from='client.? 
192.168.123.105:0/3905729424' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1957796161"}]': finished 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: cluster 2026-03-10T11:42:35.998962+0000 mon.a (mon.0) 1029 : cluster [DBG] osdmap e96: 8 total, 8 up, 8 in 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:36.191778+0000 mon.c (mon.1) 62 : audit [INF] from='client.? 192.168.123.105:0/3717858422' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2796425544"}]: dispatch 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:36.192096+0000 mon.a (mon.0) 1030 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2796425544"}]: dispatch 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:36.908168+0000 mon.a (mon.0) 1031 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:36.913621+0000 mon.a (mon.0) 1032 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:36.977532+0000 mon.a (mon.0) 1033 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:36.982967+0000 mon.a (mon.0) 1034 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:36.984222+0000 mon.b (mon.2) 248 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:36.984927+0000 mon.b (mon.2) 249 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:42:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:37 vm07 bash[17804]: audit 2026-03-10T11:42:36.989584+0000 mon.a (mon.0) 1035 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:42:38.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:38 vm05 bash[22470]: audit 2026-03-10T11:42:37.021263+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2796425544"}]': finished 2026-03-10T11:42:38.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:38 vm05 bash[22470]: cluster 2026-03-10T11:42:37.021363+0000 mon.a (mon.0) 1037 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T11:42:38.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:38 vm05 bash[17453]: audit 2026-03-10T11:42:37.021263+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2796425544"}]': finished 2026-03-10T11:42:38.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:38 vm05 bash[17453]: cluster 2026-03-10T11:42:37.021363+0000 mon.a (mon.0) 1037 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T11:42:38.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:38 vm07 bash[17804]: audit 2026-03-10T11:42:37.021263+0000 mon.a (mon.0) 1036 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/2796425544"}]': finished 2026-03-10T11:42:38.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:38 vm07 bash[17804]: cluster 2026-03-10T11:42:37.021363+0000 mon.a (mon.0) 1037 : cluster [DBG] osdmap e97: 8 total, 8 up, 8 in 2026-03-10T11:42:39.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:39 vm05 bash[22470]: cluster 2026-03-10T11:42:37.265486+0000 mgr.x (mgr.24770) 29 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 2 op/s 2026-03-10T11:42:39.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:39 vm05 bash[22470]: audit 2026-03-10T11:42:38.619880+0000 mon.b (mon.2) 250 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:42:39.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:39 vm05 bash[17453]: cluster 2026-03-10T11:42:37.265486+0000 mgr.x (mgr.24770) 29 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 2 op/s 2026-03-10T11:42:39.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:39 vm05 bash[17453]: audit 2026-03-10T11:42:38.619880+0000 mon.b (mon.2) 250 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:42:39.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:39 vm07 bash[17804]: cluster 2026-03-10T11:42:37.265486+0000 mgr.x (mgr.24770) 29 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.7 KiB/s rd, 2 op/s 2026-03-10T11:42:39.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:39 vm07 bash[17804]: audit 2026-03-10T11:42:38.619880+0000 mon.b (mon.2) 250 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:42:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:41 vm05 bash[22470]: cluster 2026-03-10T11:42:39.265795+0000 mgr.x (mgr.24770) 30 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T11:42:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:41 vm05 bash[17453]: cluster 2026-03-10T11:42:39.265795+0000 mgr.x (mgr.24770) 30 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T11:42:41.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:41 vm07 bash[17804]: cluster 2026-03-10T11:42:39.265795+0000 mgr.x (mgr.24770) 30 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 96 MiB used, 160 GiB / 160 GiB avail; 1.3 KiB/s rd, 1 op/s 2026-03-10T11:42:42.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:42 
vm05 bash[22470]: audit 2026-03-10T11:42:41.164513+0000 mgr.x (mgr.24770) 31 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:42:42.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:42 vm05 bash[17453]: audit 2026-03-10T11:42:41.164513+0000 mgr.x (mgr.24770) 31 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:42:42.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:42 vm07 bash[17804]: audit 2026-03-10T11:42:41.164513+0000 mgr.x (mgr.24770) 31 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:42:43.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:43 vm05 bash[22470]: cluster 2026-03-10T11:42:41.266266+0000 mgr.x (mgr.24770) 32 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 815 B/s rd, 0 op/s 2026-03-10T11:42:43.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:43 vm05 bash[17453]: cluster 2026-03-10T11:42:41.266266+0000 mgr.x (mgr.24770) 32 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 815 B/s rd, 0 op/s 2026-03-10T11:42:43.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:43 vm07 bash[17804]: cluster 2026-03-10T11:42:41.266266+0000 mgr.x (mgr.24770) 32 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 815 B/s rd, 0 op/s 2026-03-10T11:42:44.694 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:42:44 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:42:44] "GET /metrics HTTP/1.1" 200 37546 "" "Prometheus/2.51.0" 2026-03-10T11:42:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:45 vm05 bash[22470]: cluster 2026-03-10T11:42:43.266528+0000 mgr.x (mgr.24770) 33 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:42:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:45 vm05 bash[17453]: cluster 2026-03-10T11:42:43.266528+0000 mgr.x (mgr.24770) 33 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:42:45.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:45 vm07 bash[17804]: cluster 2026-03-10T11:42:43.266528+0000 mgr.x (mgr.24770) 33 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:42:47.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:47 vm05 bash[22470]: cluster 2026-03-10T11:42:45.267065+0000 mgr.x (mgr.24770) 34 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T11:42:47.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:47 vm05 bash[17453]: cluster 2026-03-10T11:42:45.267065+0000 mgr.x (mgr.24770) 34 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-10T11:42:47.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:47 vm07 bash[17804]: cluster 2026-03-10T11:42:45.267065+0000 mgr.x (mgr.24770) 34 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 
op/s 2026-03-10T11:42:49.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:49 vm05 bash[22470]: cluster 2026-03-10T11:42:47.267393+0000 mgr.x (mgr.24770) 35 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 999 B/s rd, 0 op/s 2026-03-10T11:42:49.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:49 vm05 bash[17453]: cluster 2026-03-10T11:42:47.267393+0000 mgr.x (mgr.24770) 35 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 999 B/s rd, 0 op/s 2026-03-10T11:42:49.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:49 vm07 bash[17804]: cluster 2026-03-10T11:42:47.267393+0000 mgr.x (mgr.24770) 35 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 999 B/s rd, 0 op/s 2026-03-10T11:42:51.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:51 vm05 bash[22470]: cluster 2026-03-10T11:42:49.267690+0000 mgr.x (mgr.24770) 36 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:42:51.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:51 vm05 bash[17453]: cluster 2026-03-10T11:42:49.267690+0000 mgr.x (mgr.24770) 36 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:42:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:51 vm07 bash[17804]: cluster 2026-03-10T11:42:49.267690+0000 mgr.x (mgr.24770) 36 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:42:52.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:52 vm05 bash[22470]: audit 2026-03-10T11:42:51.174538+0000 mgr.x (mgr.24770) 37 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:42:52.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:52 vm05 bash[17453]: audit 2026-03-10T11:42:51.174538+0000 mgr.x (mgr.24770) 37 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:42:52.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:52 vm07 bash[17804]: audit 2026-03-10T11:42:51.174538+0000 mgr.x (mgr.24770) 37 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:42:53.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:53 vm05 bash[22470]: cluster 2026-03-10T11:42:51.268200+0000 mgr.x (mgr.24770) 38 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:42:53.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:53 vm05 bash[17453]: cluster 2026-03-10T11:42:51.268200+0000 mgr.x (mgr.24770) 38 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:42:53.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:53 vm07 bash[17804]: cluster 2026-03-10T11:42:51.268200+0000 mgr.x (mgr.24770) 38 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:42:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:54 vm05 bash[22470]: audit 
2026-03-10T11:42:53.619993+0000 mon.b (mon.2) 251 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:42:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:54 vm05 bash[17453]: audit 2026-03-10T11:42:53.619993+0000 mon.b (mon.2) 251 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:42:54.408 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:54 vm07 bash[17804]: audit 2026-03-10T11:42:53.619993+0000 mon.b (mon.2) 251 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:42:54.694 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:42:54 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:42:54] "GET /metrics HTTP/1.1" 200 37549 "" "Prometheus/2.51.0" 2026-03-10T11:42:55.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:55 vm05 bash[22470]: cluster 2026-03-10T11:42:53.268515+0000 mgr.x (mgr.24770) 39 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:42:55.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:55 vm05 bash[17453]: cluster 2026-03-10T11:42:53.268515+0000 mgr.x (mgr.24770) 39 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:42:55.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:55 vm07 bash[17804]: cluster 2026-03-10T11:42:53.268515+0000 mgr.x (mgr.24770) 39 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:42:57.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:57 vm05 bash[22470]: cluster 2026-03-10T11:42:55.269022+0000 mgr.x (mgr.24770) 40 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:42:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:57 vm05 bash[17453]: cluster 2026-03-10T11:42:55.269022+0000 mgr.x (mgr.24770) 40 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:42:57.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:57 vm07 bash[17804]: cluster 2026-03-10T11:42:55.269022+0000 mgr.x (mgr.24770) 40 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:42:59.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:42:59 vm05 bash[22470]: cluster 2026-03-10T11:42:57.269385+0000 mgr.x (mgr.24770) 41 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:42:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:42:59 vm05 bash[17453]: cluster 2026-03-10T11:42:57.269385+0000 mgr.x (mgr.24770) 41 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:42:59.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:42:59 vm07 bash[17804]: cluster 2026-03-10T11:42:57.269385+0000 mgr.x (mgr.24770) 41 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
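The burst of 'osd blocklist rm' dispatch/finished pairs above is the iSCSI gateway client (client.iscsi.foo.vm05.txapnk) clearing its stale blocklist entries one address at a time; each completed removal commits a fresh osdmap epoch, which is why e93 through e97 land within a few seconds while the OSD count holds steady at 8 total, 8 up, 8 in. A minimal sketch for extracting that epoch progression from a log like this one follows; the log path is a placeholder, and it assumes each relayed 'osdmap eN' message sits on one physical line, as it does in the raw teuthology output. Since mon.a, mon.b, and mon.c all relay the same cluster message, keying on the epoch number collapses the three copies.

    import re

    LOG_PATH = "teuthology.log"  # placeholder; point this at the job log

    # Matches cluster [DBG] relays such as "osdmap e93: 8 total, 8 up, 8 in".
    OSDMAP = re.compile(r"osdmap e(\d+): (\d+) total, (\d+) up, (\d+) in")

    epochs = {}
    with open(LOG_PATH) as f:
        for line in f:
            m = OSDMAP.search(line)
            if m:
                # Every monitor relays the same message; keying on the
                # epoch number de-duplicates the copies.
                epochs[int(m.group(1))] = tuple(map(int, m.groups()[1:]))

    for epoch in sorted(epochs):
        total, up, up_in = epochs[epoch]
        print(f"e{epoch}: {total} total, {up} up, {up_in} in")
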
2026-03-10T11:43:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:01 vm05 bash[22470]: cluster 2026-03-10T11:42:59.269666+0000 mgr.x (mgr.24770) 42 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:01 vm05 bash[17453]: cluster 2026-03-10T11:42:59.269666+0000 mgr.x (mgr.24770) 42 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:01.433 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:01 vm07 bash[17804]: cluster 2026-03-10T11:42:59.269666+0000 mgr.x (mgr.24770) 42 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:02.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:02 vm07 bash[17804]: audit 2026-03-10T11:43:01.183612+0000 mgr.x (mgr.24770) 43 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:02.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:02 vm05 bash[22470]: audit 2026-03-10T11:43:01.183612+0000 mgr.x (mgr.24770) 43 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:02.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:02 vm05 bash[17453]: audit 2026-03-10T11:43:01.183612+0000 mgr.x (mgr.24770) 43 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:03.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:03 vm07 bash[17804]: cluster 2026-03-10T11:43:01.270114+0000 mgr.x (mgr.24770) 44 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:03.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:03 vm05 bash[22470]: cluster 2026-03-10T11:43:01.270114+0000 mgr.x (mgr.24770) 44 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:03.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:03 vm05 bash[17453]: cluster 2026-03-10T11:43:01.270114+0000 mgr.x (mgr.24770) 44 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:04.694 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:43:04 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:43:04] "GET /metrics HTTP/1.1" 200 37549 "" "Prometheus/2.51.0" 2026-03-10T11:43:05.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:05 vm07 bash[17804]: cluster 2026-03-10T11:43:03.270364+0000 mgr.x (mgr.24770) 45 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:05.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:05 vm05 bash[22470]: cluster 2026-03-10T11:43:03.270364+0000 mgr.x (mgr.24770) 45 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:05.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:05 vm05 bash[17453]: cluster 2026-03-10T11:43:03.270364+0000 mgr.x (mgr.24770) 45 : cluster [DBG] pgmap v29: 161 pgs: 161 
active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:07.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:07 vm07 bash[17804]: cluster 2026-03-10T11:43:05.270841+0000 mgr.x (mgr.24770) 46 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:07.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:07 vm05 bash[22470]: cluster 2026-03-10T11:43:05.270841+0000 mgr.x (mgr.24770) 46 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:07.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:07 vm05 bash[17453]: cluster 2026-03-10T11:43:05.270841+0000 mgr.x (mgr.24770) 46 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:09.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:09 vm07 bash[17804]: cluster 2026-03-10T11:43:07.271178+0000 mgr.x (mgr.24770) 47 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:09.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:09 vm07 bash[17804]: audit 2026-03-10T11:43:08.620171+0000 mon.b (mon.2) 252 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:43:09.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:09 vm05 bash[22470]: cluster 2026-03-10T11:43:07.271178+0000 mgr.x (mgr.24770) 47 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:09.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:09 vm05 bash[22470]: audit 2026-03-10T11:43:08.620171+0000 mon.b (mon.2) 252 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:43:09.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:09 vm05 bash[17453]: cluster 2026-03-10T11:43:07.271178+0000 mgr.x (mgr.24770) 47 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:09.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:09 vm05 bash[17453]: audit 2026-03-10T11:43:08.620171+0000 mon.b (mon.2) 252 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:43:11.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:11 vm07 bash[17804]: cluster 2026-03-10T11:43:09.271515+0000 mgr.x (mgr.24770) 48 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:11.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:11 vm05 bash[22470]: cluster 2026-03-10T11:43:09.271515+0000 mgr.x (mgr.24770) 48 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:11.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:11 vm05 bash[17453]: cluster 2026-03-10T11:43:09.271515+0000 mgr.x (mgr.24770) 48 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:12.444 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:12 vm07 bash[17804]: audit 2026-03-10T11:43:11.193678+0000 mgr.x (mgr.24770) 49 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:12.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:12 vm05 bash[22470]: audit 2026-03-10T11:43:11.193678+0000 mgr.x (mgr.24770) 49 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:12.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:12 vm05 bash[17453]: audit 2026-03-10T11:43:11.193678+0000 mgr.x (mgr.24770) 49 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:13 vm07 bash[17804]: cluster 2026-03-10T11:43:11.271965+0000 mgr.x (mgr.24770) 50 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:13.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:13 vm05 bash[22470]: cluster 2026-03-10T11:43:11.271965+0000 mgr.x (mgr.24770) 50 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:13.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:13 vm05 bash[17453]: cluster 2026-03-10T11:43:11.271965+0000 mgr.x (mgr.24770) 50 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:14.694 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:43:14 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:43:14] "GET /metrics HTTP/1.1" 200 37542 "" "Prometheus/2.51.0" 2026-03-10T11:43:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:15 vm07 bash[17804]: cluster 2026-03-10T11:43:13.272278+0000 mgr.x (mgr.24770) 51 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:15.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:15 vm05 bash[22470]: cluster 2026-03-10T11:43:13.272278+0000 mgr.x (mgr.24770) 51 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:15.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:15 vm05 bash[17453]: cluster 2026-03-10T11:43:13.272278+0000 mgr.x (mgr.24770) 51 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:17.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:17 vm07 bash[17804]: cluster 2026-03-10T11:43:15.272783+0000 mgr.x (mgr.24770) 52 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:17.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:17 vm05 bash[22470]: cluster 2026-03-10T11:43:15.272783+0000 mgr.x (mgr.24770) 52 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:17.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:17 vm05 bash[17453]: cluster 2026-03-10T11:43:15.272783+0000 mgr.x (mgr.24770) 52 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB 
data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:19.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:19 vm07 bash[17804]: cluster 2026-03-10T11:43:17.273090+0000 mgr.x (mgr.24770) 53 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:19.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:19 vm05 bash[22470]: cluster 2026-03-10T11:43:17.273090+0000 mgr.x (mgr.24770) 53 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:19.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:19 vm05 bash[17453]: cluster 2026-03-10T11:43:17.273090+0000 mgr.x (mgr.24770) 53 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:21.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:21 vm07 bash[17804]: cluster 2026-03-10T11:43:19.273392+0000 mgr.x (mgr.24770) 54 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:21.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:21 vm05 bash[22470]: cluster 2026-03-10T11:43:19.273392+0000 mgr.x (mgr.24770) 54 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:21.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:21 vm05 bash[17453]: cluster 2026-03-10T11:43:19.273392+0000 mgr.x (mgr.24770) 54 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:23 vm07 bash[17804]: audit 2026-03-10T11:43:21.203350+0000 mgr.x (mgr.24770) 55 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:23 vm07 bash[17804]: cluster 2026-03-10T11:43:21.273867+0000 mgr.x (mgr.24770) 56 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:23.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:23 vm05 bash[22470]: audit 2026-03-10T11:43:21.203350+0000 mgr.x (mgr.24770) 55 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:23.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:23 vm05 bash[22470]: cluster 2026-03-10T11:43:21.273867+0000 mgr.x (mgr.24770) 56 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:23.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:23 vm05 bash[17453]: audit 2026-03-10T11:43:21.203350+0000 mgr.x (mgr.24770) 55 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:23.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:23 vm05 bash[17453]: cluster 2026-03-10T11:43:21.273867+0000 mgr.x (mgr.24770) 56 : cluster [DBG] pgmap v38: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:24.444 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:24 vm07 bash[17804]: audit 2026-03-10T11:43:23.621595+0000 mon.b (mon.2) 253 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:43:24.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:43:24 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:43:24] "GET /metrics HTTP/1.1" 200 37544 "" "Prometheus/2.51.0" 2026-03-10T11:43:24.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:24 vm05 bash[22470]: audit 2026-03-10T11:43:23.621595+0000 mon.b (mon.2) 253 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:43:24.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:24 vm05 bash[17453]: audit 2026-03-10T11:43:23.621595+0000 mon.b (mon.2) 253 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:43:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:25 vm07 bash[17804]: cluster 2026-03-10T11:43:23.274144+0000 mgr.x (mgr.24770) 57 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:25.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:25 vm05 bash[22470]: cluster 2026-03-10T11:43:23.274144+0000 mgr.x (mgr.24770) 57 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:25.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:25 vm05 bash[17453]: cluster 2026-03-10T11:43:23.274144+0000 mgr.x (mgr.24770) 57 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:27.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:27 vm07 bash[17804]: cluster 2026-03-10T11:43:25.274729+0000 mgr.x (mgr.24770) 58 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:27 vm05 bash[22470]: cluster 2026-03-10T11:43:25.274729+0000 mgr.x (mgr.24770) 58 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:27 vm05 bash[17453]: cluster 2026-03-10T11:43:25.274729+0000 mgr.x (mgr.24770) 58 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:29 vm07 bash[17804]: cluster 2026-03-10T11:43:27.275018+0000 mgr.x (mgr.24770) 59 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:29.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:29 vm05 bash[22470]: cluster 2026-03-10T11:43:27.275018+0000 mgr.x (mgr.24770) 59 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:29.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:29 vm05 bash[17453]: cluster 2026-03-10T11:43:27.275018+0000 mgr.x (mgr.24770) 59 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 
97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:31.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:31 vm07 bash[17804]: cluster 2026-03-10T11:43:29.275322+0000 mgr.x (mgr.24770) 60 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:31.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:31 vm05 bash[22470]: cluster 2026-03-10T11:43:29.275322+0000 mgr.x (mgr.24770) 60 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:31.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:31 vm05 bash[17453]: cluster 2026-03-10T11:43:29.275322+0000 mgr.x (mgr.24770) 60 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:33.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:33 vm07 bash[17804]: audit 2026-03-10T11:43:31.213929+0000 mgr.x (mgr.24770) 61 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:33.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:33 vm07 bash[17804]: cluster 2026-03-10T11:43:31.275806+0000 mgr.x (mgr.24770) 62 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:33.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:33 vm05 bash[22470]: audit 2026-03-10T11:43:31.213929+0000 mgr.x (mgr.24770) 61 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:33.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:33 vm05 bash[22470]: cluster 2026-03-10T11:43:31.275806+0000 mgr.x (mgr.24770) 62 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:33.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:33 vm05 bash[17453]: audit 2026-03-10T11:43:31.213929+0000 mgr.x (mgr.24770) 61 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:43:33.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:33 vm05 bash[17453]: cluster 2026-03-10T11:43:31.275806+0000 mgr.x (mgr.24770) 62 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:43:34 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:43:34] "GET /metrics HTTP/1.1" 200 37544 "" "Prometheus/2.51.0" 2026-03-10T11:43:35.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:35 vm05 bash[22470]: cluster 2026-03-10T11:43:33.276075+0000 mgr.x (mgr.24770) 63 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:35.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:35 vm05 bash[17453]: cluster 2026-03-10T11:43:33.276075+0000 mgr.x (mgr.24770) 63 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:35 vm07 bash[17804]: cluster 2026-03-10T11:43:33.276075+0000 mgr.x 
(mgr.24770) 63 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:36 vm05 bash[22470]: cluster 2026-03-10T11:43:35.276591+0000 mgr.x (mgr.24770) 64 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:36 vm05 bash[17453]: cluster 2026-03-10T11:43:35.276591+0000 mgr.x (mgr.24770) 64 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:36.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:36 vm07 bash[17804]: cluster 2026-03-10T11:43:35.276591+0000 mgr.x (mgr.24770) 64 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:43:37.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:37 vm05 bash[22470]: audit 2026-03-10T11:43:37.037918+0000 mon.b (mon.2) 254 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:43:37.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:37 vm05 bash[17453]: audit 2026-03-10T11:43:37.037918+0000 mon.b (mon.2) 254 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:43:37.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:37 vm07 bash[17804]: audit 2026-03-10T11:43:37.037918+0000 mon.b (mon.2) 254 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:43:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:38 vm05 bash[22470]: cluster 2026-03-10T11:43:37.276895+0000 mgr.x (mgr.24770) 65 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:38 vm05 bash[22470]: audit 2026-03-10T11:43:37.351811+0000 mon.b (mon.2) 255 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:43:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:38 vm05 bash[22470]: audit 2026-03-10T11:43:37.352635+0000 mon.b (mon.2) 256 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:43:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:38 vm05 bash[22470]: audit 2026-03-10T11:43:37.361350+0000 mon.a (mon.0) 1038 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:43:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:38 vm05 bash[17453]: cluster 2026-03-10T11:43:37.276895+0000 mgr.x (mgr.24770) 65 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:38 vm05 bash[17453]: audit 2026-03-10T11:43:37.351811+0000 mon.b (mon.2) 255 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:43:38.592 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:38 vm05 bash[17453]: audit 2026-03-10T11:43:37.352635+0000 mon.b (mon.2) 256 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:43:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:38 vm05 bash[17453]: audit 2026-03-10T11:43:37.361350+0000 mon.a (mon.0) 1038 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:43:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:38 vm07 bash[17804]: cluster 2026-03-10T11:43:37.276895+0000 mgr.x (mgr.24770) 65 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:38 vm07 bash[17804]: audit 2026-03-10T11:43:37.351811+0000 mon.b (mon.2) 255 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:43:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:38 vm07 bash[17804]: audit 2026-03-10T11:43:37.352635+0000 mon.b (mon.2) 256 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:43:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:38 vm07 bash[17804]: audit 2026-03-10T11:43:37.361350+0000 mon.a (mon.0) 1038 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:43:39.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:39 vm05 bash[22470]: audit 2026-03-10T11:43:38.621702+0000 mon.b (mon.2) 257 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:43:39.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:39 vm05 bash[17453]: audit 2026-03-10T11:43:38.621702+0000 mon.b (mon.2) 257 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:43:39.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:39 vm07 bash[17804]: audit 2026-03-10T11:43:38.621702+0000 mon.b (mon.2) 257 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:43:40.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:40 vm05 bash[22470]: cluster 2026-03-10T11:43:39.277252+0000 mgr.x (mgr.24770) 66 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:40.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:40 vm05 bash[17453]: cluster 2026-03-10T11:43:39.277252+0000 mgr.x (mgr.24770) 66 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:40.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:40 vm07 bash[17804]: cluster 2026-03-10T11:43:39.277252+0000 mgr.x (mgr.24770) 66 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:43:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:42 vm05 bash[22470]: audit 2026-03-10T11:43:41.220730+0000 mgr.x (mgr.24770) 67 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 
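By this point the run has settled into a steady polling rhythm: mgr.x issues 'osd blocklist ls' to the monitors roughly every 15 seconds, the iSCSI client polls 'service status' every 10 seconds, and the mgr publishes a pgmap digest every 2 seconds. A rough sketch for measuring those cadences from the audit timestamps, under the same one-entry-per-line assumption as the sketch above (log path again a placeholder):

    import re
    from collections import defaultdict
    from datetime import datetime

    LOG_PATH = "teuthology.log"  # placeholder; point this at the job log

    # Audit entries carry a cluster-side timestamp and a JSON-ish cmd field;
    # capture the timestamp plus the command prefix of each dispatch.
    AUDIT = re.compile(
        r"audit (\S+) \S+ \(\S+\) \d+ : audit \[\w+\]"
        r'.*?"prefix": "([^"]+)".*?: dispatch'
    )

    times = defaultdict(set)
    with open(LOG_PATH) as f:
        for line in f:
            m = AUDIT.search(line)
            if m:
                ts = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S.%f%z")
                times[m.group(2)].add(ts)  # a set drops the per-monitor duplicates

    for prefix, stamps in sorted(times.items()):
        ordered = sorted(stamps)
        gaps = [(b - a).total_seconds() for a, b in zip(ordered, ordered[1:])]
        mean = sum(gaps) / len(gaps) if gaps else 0.0
        print(f"{prefix}: {len(ordered)} dispatches, mean interval {mean:.1f}s")

Run against this excerpt it should report 'service status' near 10 s and 'osd blocklist ls' near 15 s; the pgmap lines are cluster-channel messages rather than audit entries, so they fall outside this regex.
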
2026-03-10T11:43:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:42 vm05 bash[22470]: cluster 2026-03-10T11:43:41.277743+0000 mgr.x (mgr.24770) 68 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:42.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:42 vm05 bash[17453]: audit 2026-03-10T11:43:41.220730+0000 mgr.x (mgr.24770) 67 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:43:42.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:42 vm05 bash[17453]: cluster 2026-03-10T11:43:41.277743+0000 mgr.x (mgr.24770) 68 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:42.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:42 vm07 bash[17804]: audit 2026-03-10T11:43:41.220730+0000 mgr.x (mgr.24770) 67 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:43:42.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:42 vm07 bash[17804]: cluster 2026-03-10T11:43:41.277743+0000 mgr.x (mgr.24770) 68 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:43:44 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:43:44] "GET /metrics HTTP/1.1" 200 37545 "" "Prometheus/2.51.0"
2026-03-10T11:43:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:45 vm05 bash[22470]: cluster 2026-03-10T11:43:43.278075+0000 mgr.x (mgr.24770) 69 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:45 vm05 bash[17453]: cluster 2026-03-10T11:43:43.278075+0000 mgr.x (mgr.24770) 69 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:45 vm07 bash[17804]: cluster 2026-03-10T11:43:43.278075+0000 mgr.x (mgr.24770) 69 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:47.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:47 vm05 bash[22470]: cluster 2026-03-10T11:43:45.278693+0000 mgr.x (mgr.24770) 70 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:47.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:47 vm05 bash[17453]: cluster 2026-03-10T11:43:45.278693+0000 mgr.x (mgr.24770) 70 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:47 vm07 bash[17804]: cluster 2026-03-10T11:43:45.278693+0000 mgr.x (mgr.24770) 70 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:49.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:49 vm05 bash[22470]: cluster 2026-03-10T11:43:47.278998+0000 mgr.x (mgr.24770) 71 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:49.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:49 vm05 bash[17453]: cluster 2026-03-10T11:43:47.278998+0000 mgr.x (mgr.24770) 71 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:49.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:49 vm07 bash[17804]: cluster 2026-03-10T11:43:47.278998+0000 mgr.x (mgr.24770) 71 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:51.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:51 vm05 bash[22470]: cluster 2026-03-10T11:43:49.279305+0000 mgr.x (mgr.24770) 72 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:51.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:51 vm05 bash[17453]: cluster 2026-03-10T11:43:49.279305+0000 mgr.x (mgr.24770) 72 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:51 vm07 bash[17804]: cluster 2026-03-10T11:43:49.279305+0000 mgr.x (mgr.24770) 72 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:53.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:53 vm05 bash[22470]: audit 2026-03-10T11:43:51.228635+0000 mgr.x (mgr.24770) 73 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:43:53.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:53 vm05 bash[22470]: cluster 2026-03-10T11:43:51.279819+0000 mgr.x (mgr.24770) 74 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:53.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:53 vm05 bash[17453]: audit 2026-03-10T11:43:51.228635+0000 mgr.x (mgr.24770) 73 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:43:53.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:53 vm05 bash[17453]: cluster 2026-03-10T11:43:51.279819+0000 mgr.x (mgr.24770) 74 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:53 vm07 bash[17804]: audit 2026-03-10T11:43:51.228635+0000 mgr.x (mgr.24770) 73 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:43:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:53 vm07 bash[17804]: cluster 2026-03-10T11:43:51.279819+0000 mgr.x (mgr.24770) 74 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:54 vm05 bash[22470]: audit 2026-03-10T11:43:53.621931+0000 mon.b (mon.2) 258 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:43:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:54 vm05 bash[17453]: audit 2026-03-10T11:43:53.621931+0000 mon.b (mon.2) 258 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:43:54.407 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:54 vm07 bash[17804]: audit 2026-03-10T11:43:53.621931+0000 mon.b (mon.2) 258 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:43:54.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:43:54 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:43:54] "GET /metrics HTTP/1.1" 200 37545 "" "Prometheus/2.51.0"
2026-03-10T11:43:55.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:55 vm05 bash[22470]: cluster 2026-03-10T11:43:53.280088+0000 mgr.x (mgr.24770) 75 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:55.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:55 vm05 bash[17453]: cluster 2026-03-10T11:43:53.280088+0000 mgr.x (mgr.24770) 75 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:55.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:55 vm07 bash[17804]: cluster 2026-03-10T11:43:53.280088+0000 mgr.x (mgr.24770) 75 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:57.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:57 vm05 bash[22470]: cluster 2026-03-10T11:43:55.280619+0000 mgr.x (mgr.24770) 76 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:57 vm05 bash[17453]: cluster 2026-03-10T11:43:55.280619+0000 mgr.x (mgr.24770) 76 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:57 vm07 bash[17804]: cluster 2026-03-10T11:43:55.280619+0000 mgr.x (mgr.24770) 76 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:43:59.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:43:59 vm05 bash[22470]: cluster 2026-03-10T11:43:57.280940+0000 mgr.x (mgr.24770) 77 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:43:59 vm05 bash[17453]: cluster 2026-03-10T11:43:57.280940+0000 mgr.x (mgr.24770) 77 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:43:59.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:43:59 vm07 bash[17804]: cluster 2026-03-10T11:43:57.280940+0000 mgr.x (mgr.24770) 77 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:01 vm05 bash[22470]: cluster 2026-03-10T11:43:59.281295+0000 mgr.x (mgr.24770) 78 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:01 vm05 bash[17453]: cluster 2026-03-10T11:43:59.281295+0000 mgr.x (mgr.24770) 78 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:01.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:01 vm07 bash[17804]: cluster 2026-03-10T11:43:59.281295+0000 mgr.x (mgr.24770) 78 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:03.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:03 vm05 bash[22470]: audit 2026-03-10T11:44:01.239303+0000 mgr.x (mgr.24770) 79 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:44:03.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:03 vm05 bash[22470]: cluster 2026-03-10T11:44:01.281789+0000 mgr.x (mgr.24770) 80 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:03.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:03 vm05 bash[17453]: audit 2026-03-10T11:44:01.239303+0000 mgr.x (mgr.24770) 79 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:44:03.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:03 vm05 bash[17453]: cluster 2026-03-10T11:44:01.281789+0000 mgr.x (mgr.24770) 80 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:03 vm07 bash[17804]: audit 2026-03-10T11:44:01.239303+0000 mgr.x (mgr.24770) 79 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:44:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:03 vm07 bash[17804]: cluster 2026-03-10T11:44:01.281789+0000 mgr.x (mgr.24770) 80 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:04.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:44:04 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:44:04] "GET /metrics HTTP/1.1" 200 37545 "" "Prometheus/2.51.0"
2026-03-10T11:44:05.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:05 vm05 bash[22470]: cluster 2026-03-10T11:44:03.282070+0000 mgr.x (mgr.24770) 81 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:05.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:05 vm05 bash[17453]: cluster 2026-03-10T11:44:03.282070+0000 mgr.x (mgr.24770) 81 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:05.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:05 vm07 bash[17804]: cluster 2026-03-10T11:44:03.282070+0000 mgr.x (mgr.24770) 81 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:07.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:07 vm05 bash[22470]: cluster 2026-03-10T11:44:05.282580+0000 mgr.x (mgr.24770) 82 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:07.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:07 vm05 bash[17453]: cluster 2026-03-10T11:44:05.282580+0000 mgr.x (mgr.24770) 82 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:07.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:07 vm07 bash[17804]: cluster 2026-03-10T11:44:05.282580+0000 mgr.x (mgr.24770) 82 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:09.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:09 vm05 bash[22470]: cluster 2026-03-10T11:44:07.282852+0000 mgr.x (mgr.24770) 83 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:09.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:09 vm05 bash[22470]: audit 2026-03-10T11:44:08.622042+0000 mon.b (mon.2) 259 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:44:09.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:09 vm05 bash[17453]: cluster 2026-03-10T11:44:07.282852+0000 mgr.x (mgr.24770) 83 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:09.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:09 vm05 bash[17453]: audit 2026-03-10T11:44:08.622042+0000 mon.b (mon.2) 259 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:44:09.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:09 vm07 bash[17804]: cluster 2026-03-10T11:44:07.282852+0000 mgr.x (mgr.24770) 83 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:09.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:09 vm07 bash[17804]: audit 2026-03-10T11:44:08.622042+0000 mon.b (mon.2) 259 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:44:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:11 vm07 bash[17804]: cluster 2026-03-10T11:44:09.283040+0000 mgr.x (mgr.24770) 84 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:11.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:11 vm05 bash[22470]: cluster 2026-03-10T11:44:09.283040+0000 mgr.x (mgr.24770) 84 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:11.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:11 vm05 bash[17453]: cluster 2026-03-10T11:44:09.283040+0000 mgr.x (mgr.24770) 84 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:13.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:13 vm07 bash[17804]: audit 2026-03-10T11:44:11.250023+0000 mgr.x (mgr.24770) 85 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:44:13.444 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:13 vm07 bash[17804]: cluster 2026-03-10T11:44:11.283503+0000 mgr.x (mgr.24770) 86 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:13.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:13 vm05 bash[22470]: audit 2026-03-10T11:44:11.250023+0000 mgr.x (mgr.24770) 85 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:44:13.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:13 vm05 bash[22470]: cluster 2026-03-10T11:44:11.283503+0000 mgr.x (mgr.24770) 86 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:13.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:13 vm05 bash[17453]: audit 2026-03-10T11:44:11.250023+0000 mgr.x (mgr.24770) 85 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:44:13.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:13 vm05 bash[17453]: cluster 2026-03-10T11:44:11.283503+0000 mgr.x (mgr.24770) 86 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:14.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:44:14 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:44:14] "GET /metrics HTTP/1.1" 200 37538 "" "Prometheus/2.51.0"
2026-03-10T11:44:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:15 vm07 bash[17804]: cluster 2026-03-10T11:44:13.283745+0000 mgr.x (mgr.24770) 87 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:15.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:15 vm05 bash[22470]: cluster 2026-03-10T11:44:13.283745+0000 mgr.x (mgr.24770) 87 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:15.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:15 vm05 bash[17453]: cluster 2026-03-10T11:44:13.283745+0000 mgr.x (mgr.24770) 87 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:17 vm07 bash[17804]: cluster 2026-03-10T11:44:15.284233+0000 mgr.x (mgr.24770) 88 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:17.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:17 vm05 bash[22470]: cluster 2026-03-10T11:44:15.284233+0000 mgr.x (mgr.24770) 88 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:17.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:17 vm05 bash[17453]: cluster 2026-03-10T11:44:15.284233+0000 mgr.x (mgr.24770) 88 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:19.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:19 vm07 bash[17804]: cluster 2026-03-10T11:44:17.284563+0000 mgr.x (mgr.24770) 89 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:19.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:19 vm05 bash[22470]: cluster 2026-03-10T11:44:17.284563+0000 mgr.x (mgr.24770) 89 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:19.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:19 vm05 bash[17453]: cluster 2026-03-10T11:44:17.284563+0000 mgr.x (mgr.24770) 89 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:21.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:21 vm07 bash[17804]: cluster 2026-03-10T11:44:19.284870+0000 mgr.x (mgr.24770) 90 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:21.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:21 vm05 bash[22470]: cluster 2026-03-10T11:44:19.284870+0000 mgr.x (mgr.24770) 90 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:21.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:21 vm05 bash[17453]: cluster 2026-03-10T11:44:19.284870+0000 mgr.x (mgr.24770) 90 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:44:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:23 vm07 bash[17804]: audit 2026-03-10T11:44:21.252849+0000 mgr.x (mgr.24770) 91 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:44:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:23 vm07 bash[17804]: cluster 2026-03-10T11:44:21.285410+0000 mgr.x (mgr.24770) 92 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:23.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:23 vm05 bash[22470]: audit 2026-03-10T11:44:21.252849+0000 mgr.x (mgr.24770) 91 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:44:23.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:23 vm05 bash[22470]: cluster 2026-03-10T11:44:21.285410+0000 mgr.x (mgr.24770) 92 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:23.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:23 vm05 bash[17453]: audit 2026-03-10T11:44:21.252849+0000 mgr.x (mgr.24770) 91 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:44:23.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:23 vm05 bash[17453]: cluster 2026-03-10T11:44:21.285410+0000 mgr.x (mgr.24770) 92 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:44:24.408 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:24 vm07 bash[17804]: audit 2026-03-10T11:44:23.625383+0000 mon.b (mon.2) 260 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:44:24.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:24 vm05 bash[22470]: audit 2026-03-10T11:44:23.625383+0000 mon.b (mon.2) 260 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:44:24.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:24 vm05 bash[17453]: audit 2026-03-10T11:44:23.625383+0000 mon.b (mon.2) 260 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:44:24.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:44:24 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:44:24] "GET /metrics HTTP/1.1" 200 37545 "" "Prometheus/2.51.0" 2026-03-10T11:44:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:25 vm07 bash[17804]: cluster 2026-03-10T11:44:23.285721+0000 mgr.x (mgr.24770) 93 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:25.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:25 vm05 bash[22470]: cluster 2026-03-10T11:44:23.285721+0000 mgr.x (mgr.24770) 93 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:25.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:25 vm05 bash[17453]: cluster 2026-03-10T11:44:23.285721+0000 mgr.x (mgr.24770) 93 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:27.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:27 vm07 bash[17804]: cluster 2026-03-10T11:44:25.286303+0000 mgr.x (mgr.24770) 94 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:27 vm05 bash[22470]: cluster 2026-03-10T11:44:25.286303+0000 mgr.x (mgr.24770) 94 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:27 vm05 bash[17453]: cluster 2026-03-10T11:44:25.286303+0000 mgr.x (mgr.24770) 94 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:29 vm07 bash[17804]: cluster 2026-03-10T11:44:27.286609+0000 mgr.x (mgr.24770) 95 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:29.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:29 vm05 bash[17453]: cluster 2026-03-10T11:44:27.286609+0000 mgr.x (mgr.24770) 95 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:29.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:29 vm05 bash[22470]: cluster 2026-03-10T11:44:27.286609+0000 mgr.x (mgr.24770) 95 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:31.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:31 vm07 bash[17804]: cluster 2026-03-10T11:44:29.286893+0000 mgr.x (mgr.24770) 96 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 
457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:31.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:31 vm05 bash[22470]: cluster 2026-03-10T11:44:29.286893+0000 mgr.x (mgr.24770) 96 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:31.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:31 vm05 bash[17453]: cluster 2026-03-10T11:44:29.286893+0000 mgr.x (mgr.24770) 96 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:33.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:33 vm07 bash[17804]: audit 2026-03-10T11:44:31.262351+0000 mgr.x (mgr.24770) 97 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:44:33.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:33 vm07 bash[17804]: cluster 2026-03-10T11:44:31.287423+0000 mgr.x (mgr.24770) 98 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:33.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:33 vm05 bash[22470]: audit 2026-03-10T11:44:31.262351+0000 mgr.x (mgr.24770) 97 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:44:33.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:33 vm05 bash[22470]: cluster 2026-03-10T11:44:31.287423+0000 mgr.x (mgr.24770) 98 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:33.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:33 vm05 bash[17453]: audit 2026-03-10T11:44:31.262351+0000 mgr.x (mgr.24770) 97 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:44:33.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:33 vm05 bash[17453]: cluster 2026-03-10T11:44:31.287423+0000 mgr.x (mgr.24770) 98 : cluster [DBG] pgmap v73: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:44:34 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:44:34] "GET /metrics HTTP/1.1" 200 37545 "" "Prometheus/2.51.0" 2026-03-10T11:44:35.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:35 vm07 bash[17804]: cluster 2026-03-10T11:44:33.287766+0000 mgr.x (mgr.24770) 99 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:35.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:35 vm05 bash[22470]: cluster 2026-03-10T11:44:33.287766+0000 mgr.x (mgr.24770) 99 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:35.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:35 vm05 bash[17453]: cluster 2026-03-10T11:44:33.287766+0000 mgr.x (mgr.24770) 99 : cluster [DBG] pgmap v74: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:37.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:37 vm07 bash[17804]: cluster 
2026-03-10T11:44:35.288292+0000 mgr.x (mgr.24770) 100 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:37.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:37 vm05 bash[17453]: cluster 2026-03-10T11:44:35.288292+0000 mgr.x (mgr.24770) 100 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:37.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:37 vm05 bash[22470]: cluster 2026-03-10T11:44:35.288292+0000 mgr.x (mgr.24770) 100 : cluster [DBG] pgmap v75: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:38 vm05 bash[22470]: audit 2026-03-10T11:44:37.402257+0000 mon.b (mon.2) 261 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:38 vm05 bash[22470]: audit 2026-03-10T11:44:37.699846+0000 mon.a (mon.0) 1039 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:38 vm05 bash[22470]: audit 2026-03-10T11:44:37.707116+0000 mon.a (mon.0) 1040 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:38 vm05 bash[22470]: audit 2026-03-10T11:44:38.041291+0000 mon.b (mon.2) 262 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:38 vm05 bash[22470]: audit 2026-03-10T11:44:38.042426+0000 mon.b (mon.2) 263 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:38 vm05 bash[22470]: audit 2026-03-10T11:44:38.049344+0000 mon.a (mon.0) 1041 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:38 vm05 bash[17453]: audit 2026-03-10T11:44:37.402257+0000 mon.b (mon.2) 261 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:38 vm05 bash[17453]: audit 2026-03-10T11:44:37.699846+0000 mon.a (mon.0) 1039 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:38 vm05 bash[17453]: audit 2026-03-10T11:44:37.707116+0000 mon.a (mon.0) 1040 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:38 vm05 bash[17453]: audit 2026-03-10T11:44:38.041291+0000 mon.b (mon.2) 262 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:44:38.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:38 vm05 bash[17453]: audit 2026-03-10T11:44:38.042426+0000 mon.b (mon.2) 263 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:44:38.592 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:38 vm05 bash[17453]: audit 2026-03-10T11:44:38.049344+0000 mon.a (mon.0) 1041 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:44:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:38 vm07 bash[17804]: audit 2026-03-10T11:44:37.402257+0000 mon.b (mon.2) 261 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:44:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:38 vm07 bash[17804]: audit 2026-03-10T11:44:37.699846+0000 mon.a (mon.0) 1039 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:44:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:38 vm07 bash[17804]: audit 2026-03-10T11:44:37.707116+0000 mon.a (mon.0) 1040 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:44:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:38 vm07 bash[17804]: audit 2026-03-10T11:44:38.041291+0000 mon.b (mon.2) 262 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:44:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:38 vm07 bash[17804]: audit 2026-03-10T11:44:38.042426+0000 mon.b (mon.2) 263 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:44:38.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:38 vm07 bash[17804]: audit 2026-03-10T11:44:38.049344+0000 mon.a (mon.0) 1041 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:44:39.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:39 vm05 bash[22470]: cluster 2026-03-10T11:44:37.288687+0000 mgr.x (mgr.24770) 101 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:39.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:39 vm05 bash[22470]: audit 2026-03-10T11:44:38.625568+0000 mon.b (mon.2) 264 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:44:39.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:39 vm05 bash[17453]: cluster 2026-03-10T11:44:37.288687+0000 mgr.x (mgr.24770) 101 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:39.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:39 vm05 bash[17453]: audit 2026-03-10T11:44:38.625568+0000 mon.b (mon.2) 264 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:44:39.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:39 vm07 bash[17804]: cluster 2026-03-10T11:44:37.288687+0000 mgr.x (mgr.24770) 101 : cluster [DBG] pgmap v76: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:39.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:39 vm07 bash[17804]: audit 2026-03-10T11:44:38.625568+0000 mon.b (mon.2) 264 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:44:41.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:41 vm05 bash[22470]: cluster 2026-03-10T11:44:39.289102+0000 mgr.x (mgr.24770) 102 : cluster [DBG] 
pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:41.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:41 vm05 bash[17453]: cluster 2026-03-10T11:44:39.289102+0000 mgr.x (mgr.24770) 102 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:41.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:41 vm07 bash[17804]: cluster 2026-03-10T11:44:39.289102+0000 mgr.x (mgr.24770) 102 : cluster [DBG] pgmap v77: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:42 vm05 bash[22470]: audit 2026-03-10T11:44:41.270992+0000 mgr.x (mgr.24770) 103 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:44:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:42 vm05 bash[22470]: cluster 2026-03-10T11:44:41.289747+0000 mgr.x (mgr.24770) 104 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:42.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:42 vm05 bash[17453]: audit 2026-03-10T11:44:41.270992+0000 mgr.x (mgr.24770) 103 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:44:42.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:42 vm05 bash[17453]: cluster 2026-03-10T11:44:41.289747+0000 mgr.x (mgr.24770) 104 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:42.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:42 vm07 bash[17804]: audit 2026-03-10T11:44:41.270992+0000 mgr.x (mgr.24770) 103 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:44:42.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:42 vm07 bash[17804]: cluster 2026-03-10T11:44:41.289747+0000 mgr.x (mgr.24770) 104 : cluster [DBG] pgmap v78: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:44:44 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:44:44] "GET /metrics HTTP/1.1" 200 37557 "" "Prometheus/2.51.0" 2026-03-10T11:44:45.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:45 vm05 bash[22470]: cluster 2026-03-10T11:44:43.290126+0000 mgr.x (mgr.24770) 105 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T11:44:45.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:45 vm05 bash[17453]: cluster 2026-03-10T11:44:43.290126+0000 mgr.x (mgr.24770) 105 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T11:44:45.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:45 vm07 bash[17804]: cluster 2026-03-10T11:44:43.290126+0000 mgr.x (mgr.24770) 105 : cluster [DBG] pgmap v79: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T11:44:47.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:47 
vm05 bash[22470]: cluster 2026-03-10T11:44:45.290679+0000 mgr.x (mgr.24770) 106 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:47.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:47 vm05 bash[17453]: cluster 2026-03-10T11:44:45.290679+0000 mgr.x (mgr.24770) 106 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:47 vm07 bash[17804]: cluster 2026-03-10T11:44:45.290679+0000 mgr.x (mgr.24770) 106 : cluster [DBG] pgmap v80: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:49.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:49 vm05 bash[22470]: cluster 2026-03-10T11:44:47.290997+0000 mgr.x (mgr.24770) 107 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T11:44:49.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:49 vm05 bash[17453]: cluster 2026-03-10T11:44:47.290997+0000 mgr.x (mgr.24770) 107 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T11:44:49.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:49 vm07 bash[17804]: cluster 2026-03-10T11:44:47.290997+0000 mgr.x (mgr.24770) 107 : cluster [DBG] pgmap v81: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T11:44:51.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:51 vm05 bash[17453]: cluster 2026-03-10T11:44:49.291318+0000 mgr.x (mgr.24770) 108 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T11:44:51.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:51 vm05 bash[22470]: cluster 2026-03-10T11:44:49.291318+0000 mgr.x (mgr.24770) 108 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T11:44:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:51 vm07 bash[17804]: cluster 2026-03-10T11:44:49.291318+0000 mgr.x (mgr.24770) 108 : cluster [DBG] pgmap v82: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-10T11:44:53.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:53 vm05 bash[17453]: audit 2026-03-10T11:44:51.281754+0000 mgr.x (mgr.24770) 109 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:44:53.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:53 vm05 bash[17453]: cluster 2026-03-10T11:44:51.291918+0000 mgr.x (mgr.24770) 110 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:53.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:53 vm05 bash[22470]: audit 2026-03-10T11:44:51.281754+0000 mgr.x (mgr.24770) 109 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:44:53.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:53 vm05 bash[22470]: cluster 2026-03-10T11:44:51.291918+0000 mgr.x (mgr.24770) 110 : cluster [DBG] pgmap v83: 161 pgs: 
161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:53 vm07 bash[17804]: audit 2026-03-10T11:44:51.281754+0000 mgr.x (mgr.24770) 109 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:44:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:53 vm07 bash[17804]: cluster 2026-03-10T11:44:51.291918+0000 mgr.x (mgr.24770) 110 : cluster [DBG] pgmap v83: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:54 vm05 bash[17453]: audit 2026-03-10T11:44:53.625700+0000 mon.b (mon.2) 265 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:44:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:54 vm05 bash[22470]: audit 2026-03-10T11:44:53.625700+0000 mon.b (mon.2) 265 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:44:54.408 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:54 vm07 bash[17804]: audit 2026-03-10T11:44:53.625700+0000 mon.b (mon.2) 265 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:44:54.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:44:54 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:44:54] "GET /metrics HTTP/1.1" 200 37553 "" "Prometheus/2.51.0" 2026-03-10T11:44:55.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:55 vm05 bash[17453]: cluster 2026-03-10T11:44:53.292233+0000 mgr.x (mgr.24770) 111 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:55.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:55 vm05 bash[22470]: cluster 2026-03-10T11:44:53.292233+0000 mgr.x (mgr.24770) 111 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:55.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:55 vm07 bash[17804]: cluster 2026-03-10T11:44:53.292233+0000 mgr.x (mgr.24770) 111 : cluster [DBG] pgmap v84: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:57.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:57 vm05 bash[22470]: cluster 2026-03-10T11:44:55.292748+0000 mgr.x (mgr.24770) 112 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:57 vm05 bash[17453]: cluster 2026-03-10T11:44:55.292748+0000 mgr.x (mgr.24770) 112 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:57 vm07 bash[17804]: cluster 2026-03-10T11:44:55.292748+0000 mgr.x (mgr.24770) 112 : cluster [DBG] pgmap v85: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:44:59.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:44:59 vm05 
bash[22470]: cluster 2026-03-10T11:44:57.293101+0000 mgr.x (mgr.24770) 113 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:44:59 vm05 bash[17453]: cluster 2026-03-10T11:44:57.293101+0000 mgr.x (mgr.24770) 113 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:44:59.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:44:59 vm07 bash[17804]: cluster 2026-03-10T11:44:57.293101+0000 mgr.x (mgr.24770) 113 : cluster [DBG] pgmap v86: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:01 vm05 bash[22470]: cluster 2026-03-10T11:44:59.293476+0000 mgr.x (mgr.24770) 114 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:01 vm05 bash[17453]: cluster 2026-03-10T11:44:59.293476+0000 mgr.x (mgr.24770) 114 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:01.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:01 vm07 bash[17804]: cluster 2026-03-10T11:44:59.293476+0000 mgr.x (mgr.24770) 114 : cluster [DBG] pgmap v87: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:03.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:03 vm05 bash[22470]: audit 2026-03-10T11:45:01.288171+0000 mgr.x (mgr.24770) 115 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:03.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:03 vm05 bash[22470]: cluster 2026-03-10T11:45:01.293970+0000 mgr.x (mgr.24770) 116 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:03.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:03 vm05 bash[17453]: audit 2026-03-10T11:45:01.288171+0000 mgr.x (mgr.24770) 115 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:03.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:03 vm05 bash[17453]: cluster 2026-03-10T11:45:01.293970+0000 mgr.x (mgr.24770) 116 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:03 vm07 bash[17804]: audit 2026-03-10T11:45:01.288171+0000 mgr.x (mgr.24770) 115 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:03 vm07 bash[17804]: cluster 2026-03-10T11:45:01.293970+0000 mgr.x (mgr.24770) 116 : cluster [DBG] pgmap v88: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:04.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:04 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:45:04] "GET /metrics HTTP/1.1" 200 37553 "" 
"Prometheus/2.51.0" 2026-03-10T11:45:05.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:05 vm07 bash[17804]: cluster 2026-03-10T11:45:03.294298+0000 mgr.x (mgr.24770) 117 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:05.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:05 vm05 bash[22470]: cluster 2026-03-10T11:45:03.294298+0000 mgr.x (mgr.24770) 117 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:05.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:05 vm05 bash[17453]: cluster 2026-03-10T11:45:03.294298+0000 mgr.x (mgr.24770) 117 : cluster [DBG] pgmap v89: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:07.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:07 vm07 bash[17804]: cluster 2026-03-10T11:45:05.294867+0000 mgr.x (mgr.24770) 118 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:07.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:07 vm05 bash[22470]: cluster 2026-03-10T11:45:05.294867+0000 mgr.x (mgr.24770) 118 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:07.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:07 vm05 bash[17453]: cluster 2026-03-10T11:45:05.294867+0000 mgr.x (mgr.24770) 118 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:09.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:09 vm07 bash[17804]: cluster 2026-03-10T11:45:07.295194+0000 mgr.x (mgr.24770) 119 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:09.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:09 vm07 bash[17804]: audit 2026-03-10T11:45:08.625937+0000 mon.b (mon.2) 266 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:45:09.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:09 vm05 bash[22470]: cluster 2026-03-10T11:45:07.295194+0000 mgr.x (mgr.24770) 119 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:09.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:09 vm05 bash[22470]: audit 2026-03-10T11:45:08.625937+0000 mon.b (mon.2) 266 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:45:09.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:09 vm05 bash[17453]: cluster 2026-03-10T11:45:07.295194+0000 mgr.x (mgr.24770) 119 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:09.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:09 vm05 bash[17453]: audit 2026-03-10T11:45:08.625937+0000 mon.b (mon.2) 266 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:45:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:11 vm07 
bash[17804]: cluster 2026-03-10T11:45:09.295530+0000 mgr.x (mgr.24770) 120 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:11.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:11 vm05 bash[22470]: cluster 2026-03-10T11:45:09.295530+0000 mgr.x (mgr.24770) 120 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:11.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:11 vm05 bash[17453]: cluster 2026-03-10T11:45:09.295530+0000 mgr.x (mgr.24770) 120 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:13 vm07 bash[17804]: cluster 2026-03-10T11:45:11.296146+0000 mgr.x (mgr.24770) 121 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:13 vm07 bash[17804]: audit 2026-03-10T11:45:11.298560+0000 mgr.x (mgr.24770) 122 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:13.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:13 vm05 bash[22470]: cluster 2026-03-10T11:45:11.296146+0000 mgr.x (mgr.24770) 121 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:13.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:13 vm05 bash[22470]: audit 2026-03-10T11:45:11.298560+0000 mgr.x (mgr.24770) 122 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:13.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:13 vm05 bash[17453]: cluster 2026-03-10T11:45:11.296146+0000 mgr.x (mgr.24770) 121 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:13.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:13 vm05 bash[17453]: audit 2026-03-10T11:45:11.298560+0000 mgr.x (mgr.24770) 122 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:14.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:14 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:45:14] "GET /metrics HTTP/1.1" 200 37557 "" "Prometheus/2.51.0" 2026-03-10T11:45:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:15 vm07 bash[17804]: cluster 2026-03-10T11:45:13.296546+0000 mgr.x (mgr.24770) 123 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:15.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:15 vm05 bash[22470]: cluster 2026-03-10T11:45:13.296546+0000 mgr.x (mgr.24770) 123 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:15.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:15 vm05 bash[17453]: cluster 2026-03-10T11:45:13.296546+0000 mgr.x (mgr.24770) 123 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-10T11:45:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:17 vm07 bash[17804]: cluster 2026-03-10T11:45:15.297118+0000 mgr.x (mgr.24770) 124 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:17.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:17 vm05 bash[22470]: cluster 2026-03-10T11:45:15.297118+0000 mgr.x (mgr.24770) 124 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:17.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:17 vm05 bash[17453]: cluster 2026-03-10T11:45:15.297118+0000 mgr.x (mgr.24770) 124 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:19.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:19 vm07 bash[17804]: cluster 2026-03-10T11:45:17.297484+0000 mgr.x (mgr.24770) 125 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:19.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:19 vm05 bash[22470]: cluster 2026-03-10T11:45:17.297484+0000 mgr.x (mgr.24770) 125 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:19.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:19 vm05 bash[17453]: cluster 2026-03-10T11:45:17.297484+0000 mgr.x (mgr.24770) 125 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:21.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:21 vm07 bash[17804]: cluster 2026-03-10T11:45:19.297831+0000 mgr.x (mgr.24770) 126 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:21.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:21 vm05 bash[22470]: cluster 2026-03-10T11:45:19.297831+0000 mgr.x (mgr.24770) 126 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:21.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:21 vm05 bash[17453]: cluster 2026-03-10T11:45:19.297831+0000 mgr.x (mgr.24770) 126 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:23 vm07 bash[17804]: cluster 2026-03-10T11:45:21.298481+0000 mgr.x (mgr.24770) 127 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:23 vm07 bash[17804]: audit 2026-03-10T11:45:21.309296+0000 mgr.x (mgr.24770) 128 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:23.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:23 vm05 bash[22470]: cluster 2026-03-10T11:45:21.298481+0000 mgr.x (mgr.24770) 127 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:23.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:23 vm05 bash[22470]: audit 
2026-03-10T11:45:21.309296+0000 mgr.x (mgr.24770) 128 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:45:23.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:23 vm05 bash[17453]: cluster 2026-03-10T11:45:21.298481+0000 mgr.x (mgr.24770) 127 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:45:23.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:23 vm05 bash[17453]: audit 2026-03-10T11:45:21.309296+0000 mgr.x (mgr.24770) 128 : audit [DBG] from='client.25003 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:45:23.733 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (11m) 2m ago 18m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (11m) 2m ago 18m 39.5M - dad864ee21e9 ea7bd1695c30
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (2m) 2m ago 18m 66.0M - 3.5 e1d6a67b021e 9843c1aec53b
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283 running (13m) 2m ago 21m 517M - 19.2.3-678-ge911bdeb 654f31e6858e 29cf7638c524
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (9m) 2m ago 22m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (22m) 2m ago 22m 64.4M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (21m) 2m ago 21m 51.4M 2048M 17.2.0 e1d6a67b021e 824de3717020
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (21m) 2m ago 21m 48.0M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (11m) 2m ago 19m 7875k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (11m) 2m ago 19m 7875k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (21m) 2m ago 21m 52.1M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (21m) 2m ago 21m 54.2M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (20m) 2m ago 20m 51.1M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (20m) 2m ago 20m 53.7M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (20m) 2m ago 20m 54.1M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (19m) 2m ago 19m 50.7M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (19m) 2m ago 19m 49.2M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (19m) 2m ago 19m 51.8M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (2m) 2m ago 18m 44.1M - 2.51.0 1d3b7f56885b 81a38d7a1570
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (18m) 2m ago 18m 86.3M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:45:24.233 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (18m) 2m ago 18m 87.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:45:24.245 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:24 vm05 bash[22470]: audit 2026-03-10T11:45:23.625798+0000 mon.b (mon.2) 267 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:45:24.245 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:24 vm05 bash[17453]: audit 2026-03-10T11:45:23.625798+0000 mon.b (mon.2) 267 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:45:24.292 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ls'
2026-03-10T11:45:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:24 vm07 bash[17804]: audit 2026-03-10T11:45:23.625798+0000 mon.b (mon.2) 267 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:45:24.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:24 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:45:24] "GET /metrics HTTP/1.1" 200 37553 "" "Prometheus/2.51.0"
2026-03-10T11:45:24.775 INFO:teuthology.orchestra.run.vm05.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT
2026-03-10T11:45:24.775 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager ?:9093,9094 1/1 2m ago 19m vm05=a;count:1
2026-03-10T11:45:24.776 INFO:teuthology.orchestra.run.vm05.stdout:grafana ?:3000 1/1 2m ago 19m vm07=a;count:1
2026-03-10T11:45:24.776 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo ?:5000 1/1 2m ago 18m count:1
2026-03-10T11:45:24.776 INFO:teuthology.orchestra.run.vm05.stdout:mgr 2/2 2m ago 21m vm05=y;vm07=x;count:2
2026-03-10T11:45:24.776 INFO:teuthology.orchestra.run.vm05.stdout:mon 3/3 2m ago 21m vm05:192.168.123.105=a;vm05:[v2:192.168.123.105:3301,v1:192.168.123.105:6790]=c;vm07:192.168.123.107=b;count:3
2026-03-10T11:45:24.776 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter ?:9100 2/2 2m ago 19m vm05=a;vm07=b;count:2
2026-03-10T11:45:24.776 INFO:teuthology.orchestra.run.vm05.stdout:osd 8 2m ago -
2026-03-10T11:45:24.776 INFO:teuthology.orchestra.run.vm05.stdout:prometheus ?:9095 1/1 2m ago 19m vm07=a;count:1
2026-03-10T11:45:24.776 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo ?:8000 2/2 2m ago 18m count:2
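The two listings above are the test's pre-upgrade baseline: every mon, OSD and RGW daemon still reports 17.2.0 (quincy) while both mgr daemons already run the 19.2.3-678-ge911bdeb squid test build, and `ceph orch ls` confirms all services are fully placed (2/2 mgr, 3/3 mon, 8 osd). A minimal sketch of running the same spot check by hand, assuming a cephadm binary on PATH (the test uses its own copy under /home/ubuntu/cephtest, and the image tag and --fsid below are specific to this run):

    sudo cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- \
      bash -c 'ceph orch ps ; ceph orch ls ; ceph versions'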
2026-03-10T11:45:24.848 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {},
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13,
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:45:25.383 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:45:25.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:25 vm05 bash[17453]: cluster 2026-03-10T11:45:23.298770+0000 mgr.x (mgr.24770) 129 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:45:25.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:25 vm05 bash[22470]: cluster 2026-03-10T11:45:23.298770+0000 mgr.x (mgr.24770) 129 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:45:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:25 vm07 bash[17804]: cluster 2026-03-10T11:45:23.298770+0000 mgr.x (mgr.24770) 129 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:45:25.452 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mgr'
2026-03-10T11:45:26.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:26 vm07 bash[17804]: audit 2026-03-10T11:45:24.230363+0000
mgr.x (mgr.24770) 130 : audit [DBG] from='client.15132 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:26.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:26 vm07 bash[17804]: audit 2026-03-10T11:45:24.774746+0000 mgr.x (mgr.24770) 131 : audit [DBG] from='client.25045 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:26.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:26 vm07 bash[17804]: audit 2026-03-10T11:45:25.386318+0000 mon.a (mon.0) 1042 : audit [DBG] from='client.? 192.168.123.105:0/4270538345' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:26.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:26 vm05 bash[17453]: audit 2026-03-10T11:45:24.230363+0000 mgr.x (mgr.24770) 130 : audit [DBG] from='client.15132 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:26.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:26 vm05 bash[17453]: audit 2026-03-10T11:45:24.774746+0000 mgr.x (mgr.24770) 131 : audit [DBG] from='client.25045 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:26.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:26 vm05 bash[17453]: audit 2026-03-10T11:45:25.386318+0000 mon.a (mon.0) 1042 : audit [DBG] from='client.? 192.168.123.105:0/4270538345' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:26.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:26 vm05 bash[22470]: audit 2026-03-10T11:45:24.230363+0000 mgr.x (mgr.24770) 130 : audit [DBG] from='client.15132 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:26.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:26 vm05 bash[22470]: audit 2026-03-10T11:45:24.774746+0000 mgr.x (mgr.24770) 131 : audit [DBG] from='client.25045 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:26.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:26 vm05 bash[22470]: audit 2026-03-10T11:45:25.386318+0000 mon.a (mon.0) 1042 : audit [DBG] from='client.? 
192.168.123.105:0/4270538345' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:45:27.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:27 vm07 bash[17804]: cluster 2026-03-10T11:45:25.299380+0000 mgr.x (mgr.24770) 132 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:45:27.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:27 vm07 bash[17804]: audit 2026-03-10T11:45:25.962343+0000 mgr.x (mgr.24770) 133 : audit [DBG] from='client.25054 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mgr", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:45:27.447 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:27 vm05 bash[22470]: cluster 2026-03-10T11:45:25.299380+0000 mgr.x (mgr.24770) 132 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:45:27.447 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:27 vm05 bash[22470]: audit 2026-03-10T11:45:25.962343+0000 mgr.x (mgr.24770) 133 : audit [DBG] from='client.25054 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mgr", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:45:27.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:27 vm05 bash[17453]: cluster 2026-03-10T11:45:25.299380+0000 mgr.x (mgr.24770) 132 : cluster [DBG] pgmap v100: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:45:27.447 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:27 vm05 bash[17453]: audit 2026-03-10T11:45:25.962343+0000 mgr.x (mgr.24770) 133 : audit [DBG] from='client.25054 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mgr", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:45:27.482 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:45:27.543 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done'
2026-03-10T11:45:28.040 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:45:28.470 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:45:28.470 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (12m) 2m ago 18m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:45:28.470 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (11m) 2m ago 18m 39.5M - dad864ee21e9 ea7bd1695c30
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (2m) 2m ago 18m 66.0M - 3.5 e1d6a67b021e 9843c1aec53b
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283 running (13m) 2m ago 21m 517M - 19.2.3-678-ge911bdeb 654f31e6858e 29cf7638c524
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (9m) 2m ago 22m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (22m) 2m ago 22m 64.4M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (21m) 2m ago 21m 51.4M 2048M 17.2.0 e1d6a67b021e 824de3717020
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (21m) 2m ago 21m 48.0M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (11m) 2m ago 19m 7875k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (11m) 2m ago 19m 7875k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (21m) 2m ago 21m 52.1M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (21m) 2m ago 21m 54.2M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (20m) 2m ago 20m 51.1M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (20m) 2m ago 20m 53.7M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (20m) 2m ago 20m 54.1M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (20m) 2m ago 20m 50.7M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (19m) 2m ago 19m 49.2M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (19m) 2m ago 19m 51.8M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (2m) 2m ago 18m 44.1M - 2.51.0 1d3b7f56885b 81a38d7a1570
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (18m) 2m ago 18m 86.3M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:45:28.471 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (18m) 2m ago 18m 87.3M -
17.2.0 e1d6a67b021e 4a4d4c0acae7 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: cluster 2026-03-10T11:45:27.299661+0000 mgr.x (mgr.24770) 134 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: cephadm 2026-03-10T11:45:27.474142+0000 mgr.x (mgr.24770) 135 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: audit 2026-03-10T11:45:27.481025+0000 mon.a (mon.0) 1043 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: audit 2026-03-10T11:45:27.481715+0000 mon.b (mon.2) 268 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: audit 2026-03-10T11:45:27.790023+0000 mon.a (mon.0) 1044 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: audit 2026-03-10T11:45:27.796580+0000 mon.a (mon.0) 1045 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: audit 2026-03-10T11:45:27.797740+0000 mon.b (mon.2) 269 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: audit 2026-03-10T11:45:27.798434+0000 mon.b (mon.2) 270 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: audit 2026-03-10T11:45:27.807970+0000 mon.a (mon.0) 1046 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: cephadm 2026-03-10T11:45:27.847953+0000 mgr.x (mgr.24770) 136 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:28 vm05 bash[17453]: audit 2026-03-10T11:45:28.023384+0000 mgr.x (mgr.24770) 137 : audit [DBG] from='client.15147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: cluster 2026-03-10T11:45:27.299661+0000 mgr.x (mgr.24770) 134 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: cephadm 2026-03-10T11:45:27.474142+0000 mgr.x (mgr.24770) 135 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: audit 2026-03-10T11:45:27.481025+0000 mon.a (mon.0) 1043 : audit [INF] 
from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: audit 2026-03-10T11:45:27.481715+0000 mon.b (mon.2) 268 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: audit 2026-03-10T11:45:27.790023+0000 mon.a (mon.0) 1044 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: audit 2026-03-10T11:45:27.796580+0000 mon.a (mon.0) 1045 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: audit 2026-03-10T11:45:27.797740+0000 mon.b (mon.2) 269 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: audit 2026-03-10T11:45:27.798434+0000 mon.b (mon.2) 270 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: audit 2026-03-10T11:45:27.807970+0000 mon.a (mon.0) 1046 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: cephadm 2026-03-10T11:45:27.847953+0000 mgr.x (mgr.24770) 136 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:45:28.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:28 vm05 bash[22470]: audit 2026-03-10T11:45:28.023384+0000 mgr.x (mgr.24770) 137 : audit [DBG] from='client.15147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {},
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13,
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:45:28.726 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:45:28.937 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:45:28.937 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T11:45:28.937 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true,
2026-03-10T11:45:28.937 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mgr",
2026-03-10T11:45:28.937 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:45:28.937 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "",
2026-03-10T11:45:28.937 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image",
2026-03-10T11:45:28.937 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:45:28.937 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: cluster 2026-03-10T11:45:27.299661+0000 mgr.x (mgr.24770) 134 : cluster [DBG] pgmap v101: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: cephadm 2026-03-10T11:45:27.474142+0000 mgr.x (mgr.24770) 135 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: audit 2026-03-10T11:45:27.481025+0000 mon.a (mon.0) 1043 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: audit 2026-03-10T11:45:27.481715+0000 mon.b (mon.2) 268 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: audit 2026-03-10T11:45:27.790023+0000 mon.a (mon.0) 1044 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: audit 2026-03-10T11:45:27.796580+0000 mon.a (mon.0) 1045 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: audit 2026-03-10T11:45:27.797740+0000 mon.b (mon.2) 269 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: audit 2026-03-10T11:45:27.798434+0000 mon.b (mon.2) 270 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: audit 2026-03-10T11:45:27.807970+0000 mon.a (mon.0) 1046 : audit [INF] from='mgr.24770 ' entity='mgr.x'
2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: cephadm 2026-03-10T11:45:27.847953+0000 mgr.x (mgr.24770) 136 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:45:28.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:28 vm07 bash[17804]: audit 2026-03-10T11:45:28.023384+0000 mgr.x (mgr.24770) 137 : audit [DBG] from='client.15147 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:29 vm05 bash[22470]: audit 2026-03-10T11:45:28.260062+0000 mgr.x (mgr.24770) 138 : audit [DBG] from='client.15150 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:29 vm05 bash[22470]: audit 2026-03-10T11:45:28.467639+0000 mgr.x (mgr.24770) 139 : audit [DBG] from='client.24917 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:29 vm05 bash[22470]: audit 2026-03-10T11:45:28.728499+0000 mon.c (mon.1) 63 : audit [DBG] from='client.? 192.168.123.105:0/1305897868' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:29 vm05 bash[22470]: audit 2026-03-10T11:45:28.938774+0000 mgr.x (mgr.24770) 140 : audit [DBG] from='client.15162 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:29 vm05 bash[22470]: audit 2026-03-10T11:45:29.293822+0000 mon.a (mon.0) 1047 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:29 vm05 bash[22470]: audit 2026-03-10T11:45:29.293867+0000 mon.b (mon.2) 271 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:29 vm05 bash[22470]: audit 2026-03-10T11:45:29.296461+0000 mon.b (mon.2) 272 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:29 vm05 bash[22470]: audit 2026-03-10T11:45:29.298522+0000 mon.a (mon.0) 1048 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:29 vm05 bash[22470]: cluster 2026-03-10T11:45:29.310798+0000 mon.a (mon.0) 1049 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:29 vm05 bash[17453]: audit 2026-03-10T11:45:28.260062+0000 mgr.x (mgr.24770) 138 : audit [DBG] from='client.15150 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:29 vm05 bash[17453]: audit 2026-03-10T11:45:28.467639+0000 mgr.x (mgr.24770) 139 : audit [DBG] from='client.24917 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.842 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:29 vm05 bash[17453]: audit 2026-03-10T11:45:28.728499+0000 mon.c (mon.1) 63 : audit [DBG] from='client.? 192.168.123.105:0/1305897868' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:29 vm05 bash[17453]: audit 2026-03-10T11:45:28.938774+0000 mgr.x (mgr.24770) 140 : audit [DBG] from='client.15162 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:29 vm05 bash[17453]: audit 2026-03-10T11:45:29.293822+0000 mon.a (mon.0) 1047 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:29 vm05 bash[17453]: audit 2026-03-10T11:45:29.293867+0000 mon.b (mon.2) 271 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:29 vm05 bash[17453]: audit 2026-03-10T11:45:29.296461+0000 mon.b (mon.2) 272 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:29 vm05 bash[17453]: audit 2026-03-10T11:45:29.298522+0000 mon.a (mon.0) 1048 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T11:45:29.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:29 vm05 bash[17453]: cluster 2026-03-10T11:45:29.310798+0000 mon.a (mon.0) 1049 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:45:29.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:29 vm07 bash[17804]: audit 2026-03-10T11:45:28.260062+0000 mgr.x (mgr.24770) 138 : audit [DBG] from='client.15150 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:29 vm07 bash[17804]: audit 2026-03-10T11:45:28.467639+0000 mgr.x (mgr.24770) 139 : audit [DBG] from='client.24917 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:29 vm07 bash[17804]: audit 2026-03-10T11:45:28.728499+0000 mon.c (mon.1) 63 : audit [DBG] from='client.? 
192.168.123.105:0/1305897868' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:29.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:29 vm07 bash[17804]: audit 2026-03-10T11:45:28.938774+0000 mgr.x (mgr.24770) 140 : audit [DBG] from='client.15162 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:45:29.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:29 vm07 bash[17804]: audit 2026-03-10T11:45:29.293822+0000 mon.a (mon.0) 1047 : audit [INF] from='mgr.24770 ' entity='mgr.x' 2026-03-10T11:45:29.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:29 vm07 bash[17804]: audit 2026-03-10T11:45:29.293867+0000 mon.b (mon.2) 271 : audit [DBG] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:29.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:29 vm07 bash[17804]: audit 2026-03-10T11:45:29.296461+0000 mon.b (mon.2) 272 : audit [INF] from='mgr.24770 192.168.123.107:0/2148505876' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T11:45:29.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:29 vm07 bash[17804]: audit 2026-03-10T11:45:29.298522+0000 mon.a (mon.0) 1048 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch 2026-03-10T11:45:29.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:29 vm07 bash[17804]: cluster 2026-03-10T11:45:29.310798+0000 mon.a (mon.0) 1049 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: cephadm 2026-03-10T11:45:29.292899+0000 mgr.x (mgr.24770) 141 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: cephadm 2026-03-10T11:45:29.292928+0000 mgr.x (mgr.24770) 142 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: cephadm 2026-03-10T11:45:29.294497+0000 mgr.x (mgr.24770) 143 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: cephadm 2026-03-10T11:45:29.294801+0000 mgr.x (mgr.24770) 144 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: cephadm 2026-03-10T11:45:29.296335+0000 mgr.x (mgr.24770) 145 : cephadm [INF] Failing over to other MGR 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: cluster 2026-03-10T11:45:29.299965+0000 mgr.x (mgr.24770) 146 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: cluster 2026-03-10T11:45:30.144185+0000 mon.a (mon.0) 1050 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.302821+0000 mon.a (mon.0) 1051 : audit [INF] 
from='mgr.24770 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: cluster 2026-03-10T11:45:30.302951+0000 mon.a (mon.0) 1052 : cluster [DBG] mgrmap e36: y(active, starting, since 1.00306s), standbys: x 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.314997+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.315314+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.315581+0000 mon.c (mon.1) 66 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.322118+0000 mon.c (mon.1) 67 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.322208+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.322295+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.322356+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.322577+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.322655+0000 mon.c (mon.1) 72 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.322736+0000 mon.c (mon.1) 73 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.322810+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.322889+0000 mon.c (mon.1) 75 : audit [DBG] 
from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.323006+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:45:30.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.323138+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.323267+0000 mon.c (mon.1) 78 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:30 vm05 bash[22470]: audit 2026-03-10T11:45:30.323370+0000 mon.c (mon.1) 79 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: cephadm 2026-03-10T11:45:29.292899+0000 mgr.x (mgr.24770) 141 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: cephadm 2026-03-10T11:45:29.292928+0000 mgr.x (mgr.24770) 142 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: cephadm 2026-03-10T11:45:29.294497+0000 mgr.x (mgr.24770) 143 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: cephadm 2026-03-10T11:45:29.294801+0000 mgr.x (mgr.24770) 144 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: cephadm 2026-03-10T11:45:29.296335+0000 mgr.x (mgr.24770) 145 : cephadm [INF] Failing over to other MGR 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: cluster 2026-03-10T11:45:29.299965+0000 mgr.x (mgr.24770) 146 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: cluster 2026-03-10T11:45:30.144185+0000 mon.a (mon.0) 1050 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.302821+0000 mon.a (mon.0) 1051 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: cluster 2026-03-10T11:45:30.302951+0000 mon.a (mon.0) 1052 : cluster [DBG] mgrmap e36: y(active, starting, since 1.00306s), standbys: x 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 
vm05 bash[17453]: audit 2026-03-10T11:45:30.314997+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.315314+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.315581+0000 mon.c (mon.1) 66 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.322118+0000 mon.c (mon.1) 67 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.322208+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.322295+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.322356+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.322577+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.322655+0000 mon.c (mon.1) 72 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.322736+0000 mon.c (mon.1) 73 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.322810+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.322889+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.323006+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 
2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.323138+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.323267+0000 mon.c (mon.1) 78 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:30 vm05 bash[17453]: audit 2026-03-10T11:45:30.323370+0000 mon.c (mon.1) 79 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:45:30.593 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:45:30 vm05 bash[53899]: [10/Mar/2026:11:45:30] ENGINE Bus STOPPING 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:30 vm07 bash[36672]: debug 2026-03-10T11:45:30.303+0000 7f7a25108640 -1 mgr handle_mgr_map I was active but no longer am 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:30 vm07 bash[36672]: ignoring --setuser ceph since I am not root 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:30 vm07 bash[36672]: ignoring --setgroup ceph since I am not root 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:30 vm07 bash[36672]: debug 2026-03-10T11:45:30.427+0000 7f827ab29140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:30 vm07 bash[36672]: debug 2026-03-10T11:45:30.463+0000 7f827ab29140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: cephadm 2026-03-10T11:45:29.292899+0000 mgr.x (mgr.24770) 141 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: cephadm 2026-03-10T11:45:29.292928+0000 mgr.x (mgr.24770) 142 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: cephadm 2026-03-10T11:45:29.294497+0000 mgr.x (mgr.24770) 143 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: cephadm 2026-03-10T11:45:29.294801+0000 mgr.x (mgr.24770) 144 : cephadm [INF] Upgrade: Need to upgrade myself (mgr.x) 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: cephadm 2026-03-10T11:45:29.296335+0000 mgr.x (mgr.24770) 145 : cephadm [INF] Failing over to other MGR 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: cluster 2026-03-10T11:45:29.299965+0000 mgr.x (mgr.24770) 146 : cluster [DBG] pgmap v102: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: cluster 
2026-03-10T11:45:30.144185+0000 mon.a (mon.0) 1050 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.302821+0000 mon.a (mon.0) 1051 : audit [INF] from='mgr.24770 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: cluster 2026-03-10T11:45:30.302951+0000 mon.a (mon.0) 1052 : cluster [DBG] mgrmap e36: y(active, starting, since 1.00306s), standbys: x 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.314997+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.315314+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.315581+0000 mon.c (mon.1) 66 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.322118+0000 mon.c (mon.1) 67 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.322208+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.322295+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.322356+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:45:30.595 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.322577+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:45:30.596 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.322655+0000 mon.c (mon.1) 72 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:45:30.596 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.322736+0000 mon.c (mon.1) 73 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:45:30.596 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.322810+0000 mon.c (mon.1) 74 : audit [DBG] 
from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:45:30.596 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.322889+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:45:30.596 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.323006+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:45:30.596 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.323138+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:45:30.596 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.323267+0000 mon.c (mon.1) 78 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:45:30.596 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:30 vm07 bash[17804]: audit 2026-03-10T11:45:30.323370+0000 mon.c (mon.1) 79 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:45:30.915 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:30 vm07 bash[36672]: debug 2026-03-10T11:45:30.591+0000 7f827ab29140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T11:45:31.029 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:45:30 vm05 bash[53899]: [10/Mar/2026:11:45:30] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T11:45:31.030 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:45:30 vm05 bash[53899]: [10/Mar/2026:11:45:30] ENGINE Bus STOPPED 2026-03-10T11:45:31.030 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:45:30 vm05 bash[53899]: [10/Mar/2026:11:45:30] ENGINE Bus STARTING 2026-03-10T11:45:31.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:30 vm07 bash[36672]: debug 2026-03-10T11:45:30.911+0000 7f827ab29140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:45:31.342 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:45:31 vm05 bash[53899]: [10/Mar/2026:11:45:31] ENGINE Serving on http://:::9283 2026-03-10T11:45:31.342 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:45:31 vm05 bash[53899]: [10/Mar/2026:11:45:31] ENGINE Bus STARTED 2026-03-10T11:45:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:31 vm07 bash[17804]: cluster 2026-03-10T11:45:30.771144+0000 mon.a (mon.0) 1053 : cluster [INF] Manager daemon y is now available 2026-03-10T11:45:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:31 vm07 bash[17804]: audit 2026-03-10T11:45:30.800356+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:31 vm07 bash[17804]: audit 2026-03-10T11:45:30.812746+0000 mon.c (mon.1) 81 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:45:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:31 vm07 bash[17804]: 
audit 2026-03-10T11:45:30.833801+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:45:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:31 vm07 bash[17804]: audit 2026-03-10T11:45:30.834564+0000 mon.a (mon.0) 1054 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:45:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:31 vm07 bash[17804]: audit 2026-03-10T11:45:30.884299+0000 mon.c (mon.1) 83 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:45:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:31 vm07 bash[17804]: audit 2026-03-10T11:45:30.885089+0000 mon.a (mon.0) 1055 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:45:31.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: debug 2026-03-10T11:45:31.415+0000 7f827ab29140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:45:31.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: debug 2026-03-10T11:45:31.503+0000 7f827ab29140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:45:31.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T11:45:31.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
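NOTE: the failover above is the expected mechanics of a staggered cephadm upgrade reaching the manager daemons: mgr.x logs "Upgrade: Need to upgrade myself (mgr.x)", issues "mgr fail x", and mon.a promotes the standby (mgrmap e36: y active, x standby). The repeated "Module ... has missing NOTIFY_TYPES member" lines are per-module warnings emitted while mgr.x restarts and reloads its Python modules; they are noisy but harmless. For reference, an upgrade staggered to managers first is typically driven with commands of this shape (flags shown for illustration; the target digest is the one logged by mgr.x above):

    ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph@sha256:<digest> --daemon-types mgr
    ceph orch upgrade status
    ceph mgr stat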
2026-03-10T11:45:31.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: from numpy import show_config as show_numpy_config 2026-03-10T11:45:31.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: debug 2026-03-10T11:45:31.647+0000 7f827ab29140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:31 vm05 bash[22470]: cluster 2026-03-10T11:45:30.771144+0000 mon.a (mon.0) 1053 : cluster [INF] Manager daemon y is now available 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:31 vm05 bash[22470]: audit 2026-03-10T11:45:30.800356+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:31 vm05 bash[22470]: audit 2026-03-10T11:45:30.812746+0000 mon.c (mon.1) 81 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:31 vm05 bash[22470]: audit 2026-03-10T11:45:30.833801+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:31 vm05 bash[22470]: audit 2026-03-10T11:45:30.834564+0000 mon.a (mon.0) 1054 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:31 vm05 bash[22470]: audit 2026-03-10T11:45:30.884299+0000 mon.c (mon.1) 83 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:31 vm05 bash[22470]: audit 2026-03-10T11:45:30.885089+0000 mon.a (mon.0) 1055 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:31 vm05 bash[17453]: cluster 2026-03-10T11:45:30.771144+0000 mon.a (mon.0) 1053 : cluster [INF] Manager daemon y is now available 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:31 vm05 bash[17453]: audit 2026-03-10T11:45:30.800356+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:31 vm05 bash[17453]: audit 2026-03-10T11:45:30.812746+0000 mon.c (mon.1) 81 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:31 vm05 bash[17453]: audit 2026-03-10T11:45:30.833801+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:45:31.842 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:31 vm05 bash[17453]: audit 2026-03-10T11:45:30.834564+0000 mon.a (mon.0) 1054 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:31 vm05 bash[17453]: audit 2026-03-10T11:45:30.884299+0000 mon.c (mon.1) 83 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:45:31.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:31 vm05 bash[17453]: audit 2026-03-10T11:45:30.885089+0000 mon.a (mon.0) 1055 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:45:32.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: debug 2026-03-10T11:45:31.803+0000 7f827ab29140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:45:32.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: debug 2026-03-10T11:45:31.855+0000 7f827ab29140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:45:32.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: debug 2026-03-10T11:45:31.903+0000 7f827ab29140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:45:32.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:31 vm07 bash[36672]: debug 2026-03-10T11:45:31.947+0000 7f827ab29140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:45:32.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:32 vm07 bash[36672]: debug 2026-03-10T11:45:31.999+0000 7f827ab29140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:45:32.195 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:31 vm07 bash[42110]: ts=2026-03-10T11:45:31.772Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-10T11:45:32.195 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:31 vm07 bash[42110]: ts=2026-03-10T11:45:31.772Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-10T11:45:32.195 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:31 vm07 bash[42110]: ts=2026-03-10T11:45:31.772Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-10T11:45:32.195 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:31 vm07 bash[42110]: ts=2026-03-10T11:45:31.772Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.107:8765: 
connect: connection refused" 2026-03-10T11:45:32.195 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:31 vm07 bash[42110]: ts=2026-03-10T11:45:31.772Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-10T11:45:32.195 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:31 vm07 bash[42110]: ts=2026-03-10T11:45:31.773Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.107:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.107:8765: connect: connection refused" 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:32 vm07 bash[36672]: debug 2026-03-10T11:45:32.479+0000 7f827ab29140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:32 vm07 bash[36672]: debug 2026-03-10T11:45:32.523+0000 7f827ab29140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:32 vm07 bash[36672]: debug 2026-03-10T11:45:32.563+0000 7f827ab29140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:32 vm07 bash[36672]: debug 2026-03-10T11:45:32.719+0000 7f827ab29140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:32 vm07 bash[17804]: cluster 2026-03-10T11:45:31.519404+0000 mon.a (mon.0) 1056 : cluster [DBG] mgrmap e37: y(active, since 2s), standbys: x 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:32 vm07 bash[17804]: cephadm 2026-03-10T11:45:31.653162+0000 mgr.y (mgr.24970) 3 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Bus STARTING 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:32 vm07 bash[17804]: cephadm 2026-03-10T11:45:31.761858+0000 mgr.y (mgr.24970) 4 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:32 vm07 bash[17804]: cephadm 2026-03-10T11:45:31.762605+0000 mgr.y (mgr.24970) 5 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Client ('192.168.123.105', 35322) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:32 vm07 bash[17804]: cephadm 2026-03-10T11:45:31.863525+0000 mgr.y (mgr.24970) 6 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:32 vm07 bash[17804]: cephadm 2026-03-10T11:45:31.863600+0000 mgr.y (mgr.24970) 7 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Bus STARTED 2026-03-10T11:45:32.765 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:32 vm07 bash[17804]: cluster 2026-03-10T11:45:32.317897+0000 mgr.y (mgr.24970) 8 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:45:32.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:32 vm05 bash[22470]: cluster 
2026-03-10T11:45:31.519404+0000 mon.a (mon.0) 1056 : cluster [DBG] mgrmap e37: y(active, since 2s), standbys: x 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:32 vm05 bash[22470]: cephadm 2026-03-10T11:45:31.653162+0000 mgr.y (mgr.24970) 3 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Bus STARTING 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:32 vm05 bash[22470]: cephadm 2026-03-10T11:45:31.761858+0000 mgr.y (mgr.24970) 4 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:32 vm05 bash[22470]: cephadm 2026-03-10T11:45:31.762605+0000 mgr.y (mgr.24970) 5 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Client ('192.168.123.105', 35322) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:32 vm05 bash[22470]: cephadm 2026-03-10T11:45:31.863525+0000 mgr.y (mgr.24970) 6 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:32 vm05 bash[22470]: cephadm 2026-03-10T11:45:31.863600+0000 mgr.y (mgr.24970) 7 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Bus STARTED 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:32 vm05 bash[22470]: cluster 2026-03-10T11:45:32.317897+0000 mgr.y (mgr.24970) 8 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:32 vm05 bash[17453]: cluster 2026-03-10T11:45:31.519404+0000 mon.a (mon.0) 1056 : cluster [DBG] mgrmap e37: y(active, since 2s), standbys: x 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:32 vm05 bash[17453]: cephadm 2026-03-10T11:45:31.653162+0000 mgr.y (mgr.24970) 3 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Bus STARTING 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:32 vm05 bash[17453]: cephadm 2026-03-10T11:45:31.761858+0000 mgr.y (mgr.24970) 4 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:32 vm05 bash[17453]: cephadm 2026-03-10T11:45:31.762605+0000 mgr.y (mgr.24970) 5 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Client ('192.168.123.105', 35322) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:32 vm05 bash[17453]: cephadm 2026-03-10T11:45:31.863525+0000 mgr.y (mgr.24970) 6 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:32 vm05 bash[17453]: cephadm 2026-03-10T11:45:31.863600+0000 mgr.y (mgr.24970) 7 : cephadm [INF] [10/Mar/2026:11:45:31] ENGINE Bus STARTED 2026-03-10T11:45:32.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:32 vm05 bash[17453]: cluster 2026-03-10T11:45:32.317897+0000 mgr.y (mgr.24970) 8 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:45:33.124 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:32 vm07 bash[36672]: debug 2026-03-10T11:45:32.763+0000 
7f827ab29140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:45:33.124 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:32 vm07 bash[36672]: debug 2026-03-10T11:45:32.811+0000 7f827ab29140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:45:33.124 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:32 vm07 bash[36672]: debug 2026-03-10T11:45:32.939+0000 7f827ab29140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:45:33.415 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: debug 2026-03-10T11:45:33.123+0000 7f827ab29140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:45:33.415 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: debug 2026-03-10T11:45:33.319+0000 7f827ab29140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:45:33.415 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: debug 2026-03-10T11:45:33.367+0000 7f827ab29140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:45:33.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: debug 2026-03-10T11:45:33.411+0000 7f827ab29140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:45:33.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: debug 2026-03-10T11:45:33.583+0000 7f827ab29140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:45:34.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: debug 2026-03-10T11:45:33.847+0000 7f827ab29140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:45:34.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: [10/Mar/2026:11:45:33] ENGINE Bus STARTING 2026-03-10T11:45:34.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: CherryPy Checker: 2026-03-10T11:45:34.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: The Application mounted at '' has an empty config. 2026-03-10T11:45:34.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: [10/Mar/2026:11:45:33] ENGINE Serving on http://:::9283 2026-03-10T11:45:34.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:33 vm07 bash[36672]: [10/Mar/2026:11:45:33] ENGINE Bus STARTED 2026-03-10T11:45:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:34 vm07 bash[36672]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:45:34] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.51.0" 2026-03-10T11:45:34.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:34 vm07 bash[17804]: cluster 2026-03-10T11:45:33.541202+0000 mon.a (mon.0) 1057 : cluster [DBG] mgrmap e38: y(active, since 4s), standbys: x 2026-03-10T11:45:34.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:34 vm07 bash[17804]: cluster 2026-03-10T11:45:33.855695+0000 mon.a (mon.0) 1058 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:45:34.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:34 vm07 bash[17804]: cluster 2026-03-10T11:45:33.855838+0000 mon.a (mon.0) 1059 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:45:34.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:34 vm07 bash[17804]: audit 2026-03-10T11:45:33.858252+0000 mon.b (mon.2) 273 : audit [DBG] from='mgr.? 
192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:45:34.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:34 vm07 bash[17804]: audit 2026-03-10T11:45:33.858873+0000 mon.b (mon.2) 274 : audit [DBG] from='mgr.? 192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:45:34.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:34 vm07 bash[17804]: audit 2026-03-10T11:45:33.859707+0000 mon.b (mon.2) 275 : audit [DBG] from='mgr.? 192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:45:34.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:34 vm07 bash[17804]: audit 2026-03-10T11:45:33.860163+0000 mon.b (mon.2) 276 : audit [DBG] from='mgr.? 192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:45:34.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:34 vm07 bash[17804]: cluster 2026-03-10T11:45:34.318352+0000 mgr.y (mgr.24970) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:45:34.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:34 vm05 bash[22470]: cluster 2026-03-10T11:45:33.541202+0000 mon.a (mon.0) 1057 : cluster [DBG] mgrmap e38: y(active, since 4s), standbys: x 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:34 vm05 bash[22470]: cluster 2026-03-10T11:45:33.855695+0000 mon.a (mon.0) 1058 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:34 vm05 bash[22470]: cluster 2026-03-10T11:45:33.855838+0000 mon.a (mon.0) 1059 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:34 vm05 bash[22470]: audit 2026-03-10T11:45:33.858252+0000 mon.b (mon.2) 273 : audit [DBG] from='mgr.? 192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:34 vm05 bash[22470]: audit 2026-03-10T11:45:33.858873+0000 mon.b (mon.2) 274 : audit [DBG] from='mgr.? 192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:34 vm05 bash[22470]: audit 2026-03-10T11:45:33.859707+0000 mon.b (mon.2) 275 : audit [DBG] from='mgr.? 192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:34 vm05 bash[22470]: audit 2026-03-10T11:45:33.860163+0000 mon.b (mon.2) 276 : audit [DBG] from='mgr.? 
192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:34 vm05 bash[22470]: cluster 2026-03-10T11:45:34.318352+0000 mgr.y (mgr.24970) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:34 vm05 bash[17453]: cluster 2026-03-10T11:45:33.541202+0000 mon.a (mon.0) 1057 : cluster [DBG] mgrmap e38: y(active, since 4s), standbys: x 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:34 vm05 bash[17453]: cluster 2026-03-10T11:45:33.855695+0000 mon.a (mon.0) 1058 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:34 vm05 bash[17453]: cluster 2026-03-10T11:45:33.855838+0000 mon.a (mon.0) 1059 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:34 vm05 bash[17453]: audit 2026-03-10T11:45:33.858252+0000 mon.b (mon.2) 273 : audit [DBG] from='mgr.? 192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:34 vm05 bash[17453]: audit 2026-03-10T11:45:33.858873+0000 mon.b (mon.2) 274 : audit [DBG] from='mgr.? 192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:34 vm05 bash[17453]: audit 2026-03-10T11:45:33.859707+0000 mon.b (mon.2) 275 : audit [DBG] from='mgr.? 192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:34 vm05 bash[17453]: audit 2026-03-10T11:45:33.860163+0000 mon.b (mon.2) 276 : audit [DBG] from='mgr.? 
192.168.123.107:0/1724041300' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:45:34.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:34 vm05 bash[17453]: cluster 2026-03-10T11:45:34.318352+0000 mgr.y (mgr.24970) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:45:35.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:35 vm05 bash[17453]: cluster 2026-03-10T11:45:34.549620+0000 mon.a (mon.0) 1060 : cluster [DBG] mgrmap e39: y(active, since 5s), standbys: x 2026-03-10T11:45:35.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:35 vm05 bash[22470]: cluster 2026-03-10T11:45:34.549620+0000 mon.a (mon.0) 1060 : cluster [DBG] mgrmap e39: y(active, since 5s), standbys: x 2026-03-10T11:45:35.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:35 vm07 bash[17804]: cluster 2026-03-10T11:45:34.549620+0000 mon.a (mon.0) 1060 : cluster [DBG] mgrmap e39: y(active, since 5s), standbys: x 2026-03-10T11:45:36.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:36 vm05 bash[17453]: cluster 2026-03-10T11:45:35.549421+0000 mon.a (mon.0) 1061 : cluster [DBG] mgrmap e40: y(active, since 6s), standbys: x 2026-03-10T11:45:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:36 vm05 bash[17453]: cluster 2026-03-10T11:45:36.318760+0000 mgr.y (mgr.24970) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:45:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:36 vm05 bash[22470]: cluster 2026-03-10T11:45:35.549421+0000 mon.a (mon.0) 1061 : cluster [DBG] mgrmap e40: y(active, since 6s), standbys: x 2026-03-10T11:45:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:36 vm05 bash[22470]: cluster 2026-03-10T11:45:36.318760+0000 mgr.y (mgr.24970) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:45:36.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:36 vm07 bash[17804]: cluster 2026-03-10T11:45:35.549421+0000 mon.a (mon.0) 1061 : cluster [DBG] mgrmap e40: y(active, since 6s), standbys: x 2026-03-10T11:45:36.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:36 vm07 bash[17804]: cluster 2026-03-10T11:45:36.318760+0000 mgr.y (mgr.24970) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:36.752155+0000 mon.a (mon.0) 1062 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:36.762726+0000 mon.a (mon.0) 1063 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:36.928640+0000 mon.a (mon.0) 1064 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:36.939514+0000 mon.a (mon.0) 1065 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:37.458065+0000 mon.a (mon.0) 1066 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 
vm05 bash[17453]: audit 2026-03-10T11:45:37.468828+0000 mon.a (mon.0) 1067 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:37.473549+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:37.473883+0000 mon.a (mon.0) 1068 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:37.644809+0000 mon.a (mon.0) 1069 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:37.654927+0000 mon.a (mon.0) 1070 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:37.659127+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:37.659390+0000 mon.a (mon.0) 1071 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:37.660650+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:37 vm05 bash[17453]: audit 2026-03-10T11:45:37.661559+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:36.752155+0000 mon.a (mon.0) 1062 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:36.762726+0000 mon.a (mon.0) 1063 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:36.928640+0000 mon.a (mon.0) 1064 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:36.939514+0000 mon.a (mon.0) 1065 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.458065+0000 mon.a (mon.0) 1066 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.468828+0000 mon.a (mon.0) 1067 : audit [INF] from='mgr.24970 ' entity='mgr.y' 
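NOTE: two follow-on effects of the failover are visible above. The Prometheus "connection refused" errors against http://192.168.123.107:8765/sd/prometheus/sd-config are expected during the failover window: port 8765 is the cephadm HTTP service-discovery endpoint served by the active mgr, and it is only reachable again once mgr.y logs "ENGINE Serving on http://192.168.123.105:8765". Separately, the newly active mgr.y reconciles per-host configuration, dropping the per-host osd_memory_target overrides and regenerating the minimal client config. The equivalent manual commands (shown for illustration; cephadm issues these itself, as the audit entries record) would be:

    ceph config rm osd/host:vm07 osd_memory_target
    ceph config rm osd/host:vm05 osd_memory_target
    ceph config generate-minimal-conf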
2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.473549+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.473883+0000 mon.a (mon.0) 1068 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.644809+0000 mon.a (mon.0) 1069 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.654927+0000 mon.a (mon.0) 1070 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.659127+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.659390+0000 mon.a (mon.0) 1071 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.660650+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:45:38.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:37 vm05 bash[22470]: audit 2026-03-10T11:45:37.661559+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:36.752155+0000 mon.a (mon.0) 1062 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:36.762726+0000 mon.a (mon.0) 1063 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:36.928640+0000 mon.a (mon.0) 1064 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:36.939514+0000 mon.a (mon.0) 1065 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.458065+0000 mon.a (mon.0) 1066 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.468828+0000 mon.a (mon.0) 1067 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.473549+0000 mon.c 
(mon.1) 84 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.473883+0000 mon.a (mon.0) 1068 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.644809+0000 mon.a (mon.0) 1069 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.654927+0000 mon.a (mon.0) 1070 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.659127+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.659390+0000 mon.a (mon.0) 1071 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.660650+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:45:38.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:37 vm07 bash[17804]: audit 2026-03-10T11:45:37.661559+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 2026-03-10T11:45:37.662703+0000 mgr.y (mgr.24970) 11 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 2026-03-10T11:45:37.662941+0000 mgr.y (mgr.24970) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 2026-03-10T11:45:37.696135+0000 mgr.y (mgr.24970) 13 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 2026-03-10T11:45:37.701783+0000 mgr.y (mgr.24970) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 2026-03-10T11:45:37.726704+0000 mgr.y (mgr.24970) 15 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 2026-03-10T11:45:37.741876+0000 mgr.y (mgr.24970) 16 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 
2026-03-10T11:45:37.774488+0000 mgr.y (mgr.24970) 17 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 2026-03-10T11:45:37.798751+0000 mgr.y (mgr.24970) 18 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:37.833976+0000 mon.a (mon.0) 1072 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:37.844220+0000 mon.a (mon.0) 1073 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:37.853092+0000 mon.a (mon.0) 1074 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:37.861044+0000 mon.a (mon.0) 1075 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:37.871445+0000 mon.a (mon.0) 1076 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 2026-03-10T11:45:37.885435+0000 mgr.y (mgr.24970) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)... 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:37.885725+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:37.886206+0000 mon.a (mon.0) 1077 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:37.891325+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cephadm 2026-03-10T11:45:37.892185+0000 mgr.y (mgr.24970) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: cluster 2026-03-10T11:45:38.319292+0000 mgr.y (mgr.24970) 21 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:38.462168+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:38 vm05 bash[22470]: audit 2026-03-10T11:45:38.469387+0000 mon.a (mon.0) 1079 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.662703+0000 mgr.y (mgr.24970) 11 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:45:39.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.662941+0000 mgr.y (mgr.24970) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.696135+0000 mgr.y (mgr.24970) 13 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.701783+0000 mgr.y (mgr.24970) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.726704+0000 mgr.y (mgr.24970) 15 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.741876+0000 mgr.y (mgr.24970) 16 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.774488+0000 mgr.y (mgr.24970) 17 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.798751+0000 mgr.y (mgr.24970) 18 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:37.833976+0000 mon.a (mon.0) 1072 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:37.844220+0000 mon.a (mon.0) 1073 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:37.853092+0000 mon.a (mon.0) 1074 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:37.861044+0000 mon.a (mon.0) 1075 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:37.871445+0000 mon.a (mon.0) 1076 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.885435+0000 mgr.y (mgr.24970) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)... 
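NOTE: the "Updating vm05:/etc/ceph/ceph.conf" and ceph.client.admin.keyring entries show the new active mgr pushing the regenerated minimal config and admin keyring to each managed host, both under /etc/ceph and under the per-fsid /var/lib/ceph/<fsid>/config directory. The reconfigure of iscsi.foo.vm05.txapnk is attributed to "dependencies changed", which for cephadm-managed daemons typically means endpoints baked into their generated configuration moved with the failover. The same reconfigure can be requested by hand with something like:

    ceph orch reconfig iscsi.foo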
2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:37.885725+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:37.886206+0000 mon.a (mon.0) 1077 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:37.891325+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cephadm 2026-03-10T11:45:37.892185+0000 mgr.y (mgr.24970) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: cluster 2026-03-10T11:45:38.319292+0000 mgr.y (mgr.24970) 21 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:38.462168+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:38 vm05 bash[17453]: audit 2026-03-10T11:45:38.469387+0000 mon.a (mon.0) 1079 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.125 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.662703+0000 mgr.y (mgr.24970) 11 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.662941+0000 mgr.y (mgr.24970) 12 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.696135+0000 mgr.y (mgr.24970) 13 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.701783+0000 mgr.y (mgr.24970) 14 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.726704+0000 mgr.y (mgr.24970) 15 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.741876+0000 mgr.y (mgr.24970) 16 : cephadm [INF] Updating 
vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.774488+0000 mgr.y (mgr.24970) 17 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.798751+0000 mgr.y (mgr.24970) 18 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:37.833976+0000 mon.a (mon.0) 1072 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:37.844220+0000 mon.a (mon.0) 1073 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:37.853092+0000 mon.a (mon.0) 1074 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:37.861044+0000 mon.a (mon.0) 1075 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:37.871445+0000 mon.a (mon.0) 1076 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.885435+0000 mgr.y (mgr.24970) 19 : cephadm [INF] Reconfiguring iscsi.foo.vm05.txapnk (dependencies changed)... 
2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:37.885725+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:37.886206+0000 mon.a (mon.0) 1077 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:37.891325+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cephadm 2026-03-10T11:45:37.892185+0000 mgr.y (mgr.24970) 20 : cephadm [INF] Reconfiguring daemon iscsi.foo.vm05.txapnk on vm05 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: cluster 2026-03-10T11:45:38.319292+0000 mgr.y (mgr.24970) 21 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:38.462168+0000 mon.a (mon.0) 1078 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.126 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:38 vm07 bash[17804]: audit 2026-03-10T11:45:38.469387+0000 mon.a (mon.0) 1079 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:39.397 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 systemd[1]: Stopping Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d... 2026-03-10T11:45:39.397 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.243Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..." 2026-03-10T11:45:39.397 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.243Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..." 2026-03-10T11:45:39.397 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.243Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..." 2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.243Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..." 
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.243Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped"
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.243Z caller=main.go:1039 level=info msg="Stopping scrape manager..."
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.244Z caller=main.go:984 level=info msg="Scrape discovery manager stopped"
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.244Z caller=main.go:998 level=info msg="Notify discovery manager stopped"
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.244Z caller=main.go:1031 level=info msg="Scrape manager stopped"
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.245Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..."
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.245Z caller=main.go:1261 level=info msg="Notifier manager stopped"
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[42110]: ts=2026-03-10T11:45:39.245Z caller=main.go:1273 level=info msg="See you next time!"
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43200]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-prometheus-a
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@prometheus.a.service: Deactivated successfully.
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 systemd[1]: Stopped Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:45:39.398 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 systemd[1]: Started Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
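NOTE: The records above show a full cephadm reconfigure cycle for prometheus.a: the mgr detects changed dependencies, systemd stops the unit, Prometheus drains its managers on SIGTERM, and systemd restarts the unit immediately. A minimal sketch (Python, stdlib only; restart_gaps and its regex are illustrative helpers, not part of teuthology) for measuring the stop/start gap per daemon from journalctl captures shaped like these:

    import re
    from datetime import datetime

    # Teuthology prefixes every journalctl record with its own capture timestamp.
    EVENT = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+) "
                       r"INFO:journalctl@\S+\.stdout:.*systemd\[1\]: "
                       r"(Stopping|Started) Ceph (\S+) for")

    def restart_gaps(lines):
        """Pair Stopping/Started systemd events per daemon; yield the gap in seconds."""
        stopping = {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            ts = datetime.fromisoformat(m.group(1))
            action, daemon = m.group(2), m.group(3)
            if action == "Stopping":
                stopping[daemon] = ts
            elif daemon in stopping:
                yield daemon, (ts - stopping.pop(daemon)).total_seconds()

Run over this capture it would pair the 11:45:39.397 "Stopping Ceph prometheus.a" record with the 11:45:39.398 "Started Ceph prometheus.a" record; note the gaps are based on teuthology's capture timestamps, not the daemons' own clocks.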
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.455Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.455Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.455Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm07 (none))"
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.455Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.455Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.456Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.457Z caller=main.go:1129 level=info msg="Starting TSDB ..."
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.457Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.462Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.460Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.462Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.403µs
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.462Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.468Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=5
2026-03-10T11:45:39.695 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.488Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=5
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.496Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=2 maxSegment=5
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.499Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=3 maxSegment=5
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.503Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=4 maxSegment=5
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.510Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=5 maxSegment=5
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.510Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=33.854µs wal_replay_duration=47.935981ms wbl_replay_duration=360ns total_replay_duration=48.047218ms
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.512Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.512Z caller=main.go:1153 level=info msg="TSDB started"
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.512Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.524Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=11.577875ms db_storage=1.072µs remote_storage=972ns web_handler=541ns query_engine=441ns scrape=987.288µs scrape_sd=116.458µs notify=6.573µs notify_sd=5.711µs rules=9.970877ms tracing=3.796µs
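NOTE: Prometheus reports the TSDB WAL replay above in Go duration notation (ns, µs, ms, s). A small sketch (Python, stdlib only; UNITS and replay_durations are illustrative names) to normalize the *_duration fields of the head.go:815 record to seconds:

    import re

    # Duration units as they appear in Prometheus' Go-style log fields.
    UNITS = {"ns": 1e-9, "µs": 1e-6, "ms": 1e-3, "s": 1.0}
    DUR = re.compile(r"(\w+_duration)=([0-9.]+)(ns|µs|ms|s)")

    def replay_durations(record):
        """Return every *_duration field of a log record, converted to seconds."""
        return {name: float(value) * UNITS[unit]
                for name, value, unit in DUR.findall(record)}

For the "WAL replay completed" record above this yields total_replay_duration of roughly 0.048 s, dominated by wal_replay_duration.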
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.524Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
2026-03-10T11:45:39.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:39 vm07 bash[43274]: ts=2026-03-10T11:45:39.524Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: cephadm 2026-03-10T11:45:38.472421+0000 mgr.y (mgr.24970) 22 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: cephadm 2026-03-10T11:45:38.685384+0000 mgr.y (mgr.24970) 23 : cephadm [INF] Reconfiguring daemon prometheus.a on vm07
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.111802+0000 mon.b (mon.2) 277 : audit [DBG] from='client.? 192.168.123.105:0/736480971' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.358711+0000 mon.a (mon.0) 1080 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.367515+0000 mon.a (mon.0) 1081 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.373024+0000 mon.c (mon.1) 90 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.382679+0000 mon.a (mon.0) 1082 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.386030+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.387506+0000 mon.c (mon.1) 92 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.392696+0000 mon.a (mon.0) 1083 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.396105+0000 mon.c (mon.1) 93 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.428096+0000 mon.c (mon.1) 94 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.862075+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.862487+0000 mon.a (mon.0) 1084 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.863396+0000 mon.c (mon.1) 96 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T11:45:40.034 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 bash[17804]: audit 2026-03-10T11:45:39.864162+0000 mon.c (mon.1) 97 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
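NOTE: Every audit record above embeds the monitor command verbatim as a JSON array after cmd= (for example the auth get-or-create for mgr.x with its mon/osd/mds caps). A sketch (Python, stdlib only; audit_command is an illustrative helper) that recovers the structured command from such a record:

    import json
    import re

    # The JSON payload sits between "cmd=" and the trailing ": dispatch" or
    # ": finished" marker; "finished" records also quote it in single quotes.
    CMD = re.compile(r"cmd='?(\[.*\])'?: (dispatch|finished)")

    def audit_command(record):
        """Decode the monitor command embedded in an audit log record, if any."""
        m = CMD.search(record)
        return json.loads(m.group(1)) if m else None

Applied to the mon.c 95 record above this returns the command as a Python list of dicts, which is much easier to assert on in a test than the raw line.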
2026-03-10T11:45:40.695 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:45:40.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:45:40.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: Stopping Ceph mgr.x for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:45:40.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:40 vm07 bash[43548]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mgr-x
2026-03-10T11:45:40.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.x.service: Main process exited, code=exited, status=143/n/a
2026-03-10T11:45:40.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.x.service: Failed with result 'exit-code'.
2026-03-10T11:45:40.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: Stopped Ceph mgr.x for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:45:40.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:45:40.695 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:45:40.695 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:45:40.696 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:45:40.696 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:45:40.696 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:45:40.696 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:45:41.054 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: Started Ceph mgr.x for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: audit 2026-03-10T11:45:39.373458+0000 mgr.y (mgr.24970) 24 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: cephadm 2026-03-10T11:45:39.385839+0000 mgr.y (mgr.24970) 25 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard
2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: audit 2026-03-10T11:45:39.386477+0000 mgr.y (mgr.24970) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: audit 2026-03-10T11:45:39.387834+0000 mgr.y (mgr.24970) 27 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
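NOTE: The repeated systemd complaint above comes from line 23 of the cephadm-generated unit template, which still sets KillMode=none; systemd's own message names the remediation (KillMode=mixed or control-group). A hedged sketch (Python, stdlib only; the helper is illustrative, and /etc/systemd/system simply matches the path in the warning) to find unit files that would trigger it:

    from pathlib import Path

    def units_with_killmode_none(unit_dir="/etc/systemd/system"):
        """List (unit file, line number) pairs that still set KillMode=none."""
        hits = []
        for unit in Path(unit_dir).glob("*.service"):
            for lineno, line in enumerate(unit.read_text().splitlines(), 1):
                # Ignore trailing comments and incidental whitespace.
                if line.split("#", 1)[0].strip().replace(" ", "") == "KillMode=none":
                    hits.append((str(unit), lineno))
        return hits

The glob also matches template units such as the ceph-...@.service file flagged here, since their names end in .service as well.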
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: cephadm 2026-03-10T11:45:39.861748+0000 mgr.y (mgr.24970) 29 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: cephadm 2026-03-10T11:45:39.864848+0000 mgr.y (mgr.24970) 30 : cephadm [INF] Deploying daemon mgr.x on vm07 2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: cluster 2026-03-10T11:45:40.319636+0000 mgr.y (mgr.24970) 31 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 21 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: audit 2026-03-10T11:45:40.885394+0000 mon.a (mon.0) 1085 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: audit 2026-03-10T11:45:40.897177+0000 mon.a (mon.0) 1086 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:41.055 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:41 vm07 bash[17804]: audit 2026-03-10T11:45:40.897978+0000 mon.c (mon.1) 98 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:41.055 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:45:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:45:41.317 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:41 vm07 bash[43660]: debug 2026-03-10T11:45:41.131+0000 7f67f6aab140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:45:41.317 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:41 vm07 bash[43660]: debug 2026-03-10T11:45:41.167+0000 7f67f6aab140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: audit 2026-03-10T11:45:39.373458+0000 mgr.y (mgr.24970) 24 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: cephadm 2026-03-10T11:45:39.385839+0000 mgr.y (mgr.24970) 25 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: audit 2026-03-10T11:45:39.386477+0000 mgr.y (mgr.24970) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: audit 2026-03-10T11:45:39.387834+0000 mgr.y (mgr.24970) 27 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: audit 2026-03-10T11:45:39.396463+0000 mgr.y (mgr.24970) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: cephadm 2026-03-10T11:45:39.861748+0000 mgr.y (mgr.24970) 29 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: cephadm 2026-03-10T11:45:39.864848+0000 mgr.y (mgr.24970) 30 : cephadm [INF] Deploying daemon mgr.x on vm07 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: cluster 2026-03-10T11:45:40.319636+0000 mgr.y (mgr.24970) 31 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 21 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: audit 2026-03-10T11:45:40.885394+0000 mon.a (mon.0) 1085 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: audit 2026-03-10T11:45:40.897177+0000 mon.a (mon.0) 1086 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:41 vm05 bash[17453]: audit 2026-03-10T11:45:40.897978+0000 mon.c (mon.1) 98 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: audit 2026-03-10T11:45:39.373458+0000 mgr.y (mgr.24970) 24 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: cephadm 2026-03-10T11:45:39.385839+0000 mgr.y (mgr.24970) 25 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: audit 2026-03-10T11:45:39.386477+0000 mgr.y (mgr.24970) 26 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: audit 2026-03-10T11:45:39.387834+0000 mgr.y (mgr.24970) 27 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: audit 2026-03-10T11:45:39.396463+0000 mgr.y (mgr.24970) 28 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: cephadm 2026-03-10T11:45:39.861748+0000 mgr.y (mgr.24970) 29 : cephadm [INF] Upgrade: Updating mgr.x 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: cephadm 2026-03-10T11:45:39.864848+0000 mgr.y (mgr.24970) 30 : cephadm [INF] Deploying daemon mgr.x on vm07 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: cluster 2026-03-10T11:45:40.319636+0000 mgr.y (mgr.24970) 31 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 21 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: audit 2026-03-10T11:45:40.885394+0000 mon.a (mon.0) 1085 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: audit 2026-03-10T11:45:40.897177+0000 mon.a (mon.0) 1086 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:41.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:41 vm05 bash[22470]: audit 2026-03-10T11:45:40.897978+0000 mon.c (mon.1) 98 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:41.642 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:41 vm07 bash[43660]: debug 2026-03-10T11:45:41.315+0000 7f67f6aab140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T11:45:41.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:41 vm07 bash[43660]: debug 2026-03-10T11:45:41.639+0000 7f67f6aab140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:45:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:42 vm07 bash[17804]: cluster 2026-03-10T11:45:42.320256+0000 mgr.y (mgr.24970) 32 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:45:42.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: debug 2026-03-10T11:45:42.143+0000 7f67f6aab140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:45:42.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: debug 2026-03-10T11:45:42.235+0000 7f67f6aab140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:45:42.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T11:45:42.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T11:45:42.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: from numpy import show_config as show_numpy_config 2026-03-10T11:45:42.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: debug 2026-03-10T11:45:42.367+0000 7f67f6aab140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:45:42.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:42 vm05 bash[22470]: cluster 2026-03-10T11:45:42.320256+0000 mgr.y (mgr.24970) 32 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:45:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:42 vm05 bash[17453]: cluster 2026-03-10T11:45:42.320256+0000 mgr.y (mgr.24970) 32 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:45:42.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: debug 2026-03-10T11:45:42.515+0000 7f67f6aab140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:45:42.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: debug 2026-03-10T11:45:42.555+0000 7f67f6aab140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:45:42.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: debug 2026-03-10T11:45:42.595+0000 7f67f6aab140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:45:42.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: debug 2026-03-10T11:45:42.643+0000 7f67f6aab140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:45:42.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:42 vm07 bash[43660]: debug 2026-03-10T11:45:42.699+0000 7f67f6aab140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:45:43.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.175+0000 7f67f6aab140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:45:43.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.215+0000 7f67f6aab140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:45:43.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.255+0000 7f67f6aab140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:45:43.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.411+0000 7f67f6aab140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:45:43.789 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.455+0000 7f67f6aab140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:45:43.789 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.499+0000 7f67f6aab140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:45:43.789 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.615+0000 7f67f6aab140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:45:43.789 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.787+0000 7f67f6aab140 -1 
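NOTE: The burst of "Module X has missing NOTIFY_TYPES member" records comes from the freshly deployed mgr.x loading every mgr module during startup; the messages are per-module and otherwise identical. A sketch (Python, stdlib only; the helper name is illustrative) to collapse them into one deduplicated list:

    import re

    NOTIFY = re.compile(r"mgr\[py\] Module (\S+) has missing NOTIFY_TYPES member")

    def modules_missing_notify_types(lines):
        """Collect the distinct mgr modules flagged during daemon startup."""
        return sorted({m.group(1) for line in lines if (m := NOTIFY.search(line))})

Over this capture the list runs from alerts and balancer through volumes and zabbix, which is a far more compact summary than the raw record stream.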
2026-03-10T11:45:43.789 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.787+0000 7f67f6aab140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-10T11:45:44.060 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:43 vm07 bash[43660]: debug 2026-03-10T11:45:43.975+0000 7f67f6aab140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-10T11:45:44.060 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:44 vm07 bash[43660]: debug 2026-03-10T11:45:44.015+0000 7f67f6aab140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-10T11:45:44.392 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:44 vm07 bash[43660]: debug 2026-03-10T11:45:44.059+0000 7f67f6aab140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-10T11:45:44.392 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:44 vm07 bash[43660]: debug 2026-03-10T11:45:44.215+0000 7f67f6aab140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-10T11:45:44.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:44 vm07 bash[17804]: cluster 2026-03-10T11:45:44.320610+0000 mgr.y (mgr.24970) 33 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T11:45:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:44 vm07 bash[43660]: debug 2026-03-10T11:45:44.459+0000 7f67f6aab140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-10T11:45:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:44 vm07 bash[43660]: [10/Mar/2026:11:45:44] ENGINE Bus STARTING
2026-03-10T11:45:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:44 vm07 bash[43660]: CherryPy Checker:
2026-03-10T11:45:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:44 vm07 bash[43660]: The Application mounted at '' has an empty config.
2026-03-10T11:45:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:44 vm07 bash[43660]: [10/Mar/2026:11:45:44] ENGINE Serving on http://:::9283
2026-03-10T11:45:44.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:45:44 vm07 bash[43660]: [10/Mar/2026:11:45:44] ENGINE Bus STARTED
2026-03-10T11:45:45.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:45 vm07 bash[17804]: cluster 2026-03-10T11:45:44.467239+0000 mon.a (mon.0) 1087 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T11:45:45.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:45 vm07 bash[17804]: cluster 2026-03-10T11:45:44.467570+0000 mon.a (mon.0) 1088 : cluster [DBG] Standby manager daemon x started
2026-03-10T11:45:45.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:45 vm07 bash[17804]: audit 2026-03-10T11:45:44.467907+0000 mon.a (mon.0) 1089 : audit [DBG] from='mgr.? 192.168.123.107:0/232739174' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T11:45:45.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:45 vm07 bash[17804]: audit 2026-03-10T11:45:44.468567+0000 mon.a (mon.0) 1090 : audit [DBG] from='mgr.? 192.168.123.107:0/232739174' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:45:45.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:45 vm07 bash[17804]: audit 2026-03-10T11:45:44.469651+0000 mon.a (mon.0) 1091 : audit [DBG] from='mgr.? 192.168.123.107:0/232739174' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T11:45:45.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:45 vm07 bash[17804]: audit 2026-03-10T11:45:44.469918+0000 mon.a (mon.0) 1092 : audit [DBG] from='mgr.? 192.168.123.107:0/232739174' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:45:46.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:46 vm07 bash[17804]: cluster 2026-03-10T11:45:45.422841+0000 mon.a (mon.0) 1093 : cluster [DBG] mgrmap e41: y(active, since 16s), standbys: x
2026-03-10T11:45:46.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:46 vm07 bash[17804]: audit 2026-03-10T11:45:45.805613+0000 mon.c (mon.1) 99 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:45:46.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:46 vm07 bash[17804]: cluster 2026-03-10T11:45:46.320954+0000 mgr.y (mgr.24970) 34 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T11:45:46.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:46 vm07 bash[17804]: audit 2026-03-10T11:45:46.345203+0000 mon.a (mon.0) 1094 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:46.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:46 vm07 bash[17804]: audit 2026-03-10T11:45:46.353312+0000 mon.a (mon.0) 1095 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:47.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:47 vm07 bash[17804]: audit 2026-03-10T11:45:46.438670+0000 mon.a (mon.0) 1096 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:47.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:47 vm07 bash[17804]: audit 2026-03-10T11:45:46.453100+0000 mon.a (mon.0) 1097 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:47.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:47 vm07 bash[17804]: audit 2026-03-10T11:45:46.973502+0000 mon.a (mon.0) 1098 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:47.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:47 vm07 bash[17804]: audit 2026-03-10T11:45:47.086517+0000 mon.a (mon.0) 1099 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:45:49.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:49 vm05 bash[22470]: cluster 2026-03-10T11:45:48.321508+0000 mgr.y (mgr.24970) 35 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
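NOTE: The mgr's pgmap records are the steady health heartbeat of this run: 161 PGs stay active+clean throughout, with only the client read rate moving. A sketch (Python, stdlib only; names illustrative) to pull them into a series for plotting or assertions:

    import re

    PGMAP = re.compile(r"pgmap v(\d+): (\d+) pgs: (\d+) active\+clean.*?;\s*"
                       r"([\d.]+ \w+/s) rd(?:, ([\d.]+ \w+/s) wr)?, (\d+) op/s")

    def pgmap_series(lines):
        """Yield (version, pgs, clean, read rate, write rate, ops) per pgmap record."""
        for line in lines:
            m = PGMAP.search(line)
            if m:
                yield (int(m.group(1)), int(m.group(2)), int(m.group(3)),
                       m.group(4), m.group(5) or "0 B/s", int(m.group(6)))

Low-traffic records such as pgmap v13 below omit the write rate entirely, which is why the wr group has to be optional.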
vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:45:48] "GET /metrics HTTP/1.1" 200 37552 "" "Prometheus/2.51.0" 2026-03-10T11:45:49.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:49 vm07 bash[17804]: cluster 2026-03-10T11:45:48.321508+0000 mgr.y (mgr.24970) 35 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-10T11:45:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:50 vm05 bash[22470]: audit 2026-03-10T11:45:48.907997+0000 mgr.y (mgr.24970) 36 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:50 vm05 bash[17453]: audit 2026-03-10T11:45:48.907997+0000 mgr.y (mgr.24970) 36 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:50.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:50 vm07 bash[17804]: audit 2026-03-10T11:45:48.907997+0000 mgr.y (mgr.24970) 36 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:45:51.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:51 vm05 bash[22470]: cluster 2026-03-10T11:45:50.321867+0000 mgr.y (mgr.24970) 37 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:51.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:51 vm05 bash[17453]: cluster 2026-03-10T11:45:50.321867+0000 mgr.y (mgr.24970) 37 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:51 vm07 bash[17804]: cluster 2026-03-10T11:45:50.321867+0000 mgr.y (mgr.24970) 37 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:52.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:52 vm07 bash[17804]: cluster 2026-03-10T11:45:52.322462+0000 mgr.y (mgr.24970) 38 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:52.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:52 vm05 bash[22470]: cluster 2026-03-10T11:45:52.322462+0000 mgr.y (mgr.24970) 38 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:52.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:52 vm05 bash[17453]: cluster 2026-03-10T11:45:52.322462+0000 mgr.y (mgr.24970) 38 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.802581+0000 mon.a (mon.0) 1100 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.809098+0000 mon.a (mon.0) 1101 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.811688+0000 mon.c (mon.1) 
100 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.812638+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.816809+0000 mon.a (mon.0) 1102 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.860935+0000 mon.c (mon.1) 102 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.862609+0000 mon.c (mon.1) 103 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.863432+0000 mgr.y (mgr.24970) 39 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.870118+0000 mon.a (mon.0) 1103 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.872595+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.872827+0000 mon.a (mon.0) 1104 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.877033+0000 mon.a (mon.0) 1105 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.878956+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.879211+0000 mon.a (mon.0) 1106 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.883613+0000 mon.a (mon.0) 1107 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.885699+0000 mon.c (mon.1) 106 : audit [DBG] from='mgr.24970 
192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.886563+0000 mgr.y (mgr.24970) 40 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.891333+0000 mon.a (mon.0) 1108 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.893790+0000 mon.c (mon.1) 107 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.894599+0000 mgr.y (mgr.24970) 41 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.900214+0000 mon.a (mon.0) 1109 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.901874+0000 mon.c (mon.1) 108 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.902718+0000 mgr.y (mgr.24970) 42 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.907285+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.909351+0000 mon.c (mon.1) 109 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.910219+0000 mgr.y (mgr.24970) 43 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.916393+0000 mon.a (mon.0) 1111 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.917892+0000 mon.c (mon.1) 110 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.918639+0000 mgr.y (mgr.24970) 44 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.922401+0000 mon.a (mon.0) 1112 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.925246+0000 mon.c (mon.1) 111 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 
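
The mon relays above show cephadm's staggered upgrade at work: with the two mgr daemons already redeployed, mgr.y pins container_image for each remaining daemon type and keeps dispatching `versions` to see what still runs the old build. A minimal sketch of that same check, assuming only a `ceph` binary on PATH and the `ceph versions` JSON layout printed further down in this log; the TARGET string is copied from the mgr entries here and would differ on a real run:

import json
import subprocess

TARGET = "19.2.3-678-ge911bdeb"  # assumed target version string, taken from the mgr rows in this log

def daemons_behind(target: str = TARGET) -> int:
    """Count daemons in `ceph versions` output not yet reporting the target version."""
    raw = subprocess.check_output(["ceph", "versions"])  # prints JSON, as seen later in this log
    versions = json.loads(raw)
    behind = 0
    for daemon_type, counts in versions.items():
        if daemon_type == "overall":
            continue  # "overall" just aggregates the per-type sections
        for version_string, count in counts.items():
            if target not in version_string:
                behind += count
    return behind

if __name__ == "__main__":
    print(f"{daemons_behind()} daemons still on the old release")
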
2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.925939+0000 mgr.y (mgr.24970) 45 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.933444+0000 mon.a (mon.0) 1113 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.936372+0000 mon.c (mon.1) 112 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.937042+0000 mgr.y (mgr.24970) 46 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.940531+0000 mon.a (mon.0) 1114 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.943792+0000 mon.c (mon.1) 113 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.944499+0000 mgr.y (mgr.24970) 47 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.945734+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.946410+0000 mgr.y (mgr.24970) 48 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: audit 2026-03-10T11:45:53.947709+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cephadm 2026-03-10T11:45:53.948377+0000 mgr.y (mgr.24970) 49 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:54 vm05 bash[22470]: cluster 2026-03-10T11:45:54.322839+0000 mgr.y (mgr.24970) 50 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.802581+0000 mon.a (mon.0) 1100 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.809098+0000 mon.a (mon.0) 1101 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.811688+0000 mon.c (mon.1) 100 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.812638+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.816809+0000 mon.a (mon.0) 1102 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.860935+0000 mon.c (mon.1) 102 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.862609+0000 mon.c (mon.1) 103 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.863432+0000 mgr.y (mgr.24970) 39 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.870118+0000 mon.a (mon.0) 1103 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.872595+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T11:45:55.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.872827+0000 mon.a (mon.0) 1104 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.877033+0000 mon.a (mon.0) 1105 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.878956+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.879211+0000 mon.a (mon.0) 1106 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.883613+0000 mon.a (mon.0) 1107 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.885699+0000 mon.c (mon.1) 106 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.886563+0000 mgr.y (mgr.24970) 40 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.891333+0000 mon.a (mon.0) 1108 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.893790+0000 mon.c (mon.1) 107 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.894599+0000 mgr.y (mgr.24970) 41 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.900214+0000 mon.a (mon.0) 1109 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.901874+0000 mon.c (mon.1) 108 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.902718+0000 mgr.y (mgr.24970) 42 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.907285+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.909351+0000 mon.c (mon.1) 109 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.910219+0000 mgr.y (mgr.24970) 43 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.916393+0000 mon.a (mon.0) 1111 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.917892+0000 mon.c (mon.1) 110 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.918639+0000 mgr.y (mgr.24970) 44 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.922401+0000 mon.a (mon.0) 1112 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.925246+0000 mon.c (mon.1) 111 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 
2026-03-10T11:45:53.925939+0000 mgr.y (mgr.24970) 45 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.933444+0000 mon.a (mon.0) 1113 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.936372+0000 mon.c (mon.1) 112 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.937042+0000 mgr.y (mgr.24970) 46 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.940531+0000 mon.a (mon.0) 1114 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.943792+0000 mon.c (mon.1) 113 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.944499+0000 mgr.y (mgr.24970) 47 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.945734+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.946410+0000 mgr.y (mgr.24970) 48 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: audit 2026-03-10T11:45:53.947709+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cephadm 2026-03-10T11:45:53.948377+0000 mgr.y (mgr.24970) 49 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-10T11:45:55.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:54 vm05 bash[17453]: cluster 2026-03-10T11:45:54.322839+0000 mgr.y (mgr.24970) 50 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:55.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.802581+0000 mon.a (mon.0) 1100 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.809098+0000 mon.a (mon.0) 1101 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.811688+0000 mon.c (mon.1) 100 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:45:55.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: 
audit 2026-03-10T11:45:53.812638+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.816809+0000 mon.a (mon.0) 1102 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.860935+0000 mon.c (mon.1) 102 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.862609+0000 mon.c (mon.1) 103 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.863432+0000 mgr.y (mgr.24970) 39 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.870118+0000 mon.a (mon.0) 1103 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.872595+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.872827+0000 mon.a (mon.0) 1104 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.877033+0000 mon.a (mon.0) 1105 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.x"}]': finished 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.878956+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.879211+0000 mon.a (mon.0) 1106 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.883613+0000 mon.a (mon.0) 1107 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr.y"}]': finished 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.885699+0000 mon.c (mon.1) 106 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.886563+0000 mgr.y (mgr.24970) 40 : 
cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.891333+0000 mon.a (mon.0) 1108 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.893790+0000 mon.c (mon.1) 107 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.894599+0000 mgr.y (mgr.24970) 41 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.900214+0000 mon.a (mon.0) 1109 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.901874+0000 mon.c (mon.1) 108 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.902718+0000 mgr.y (mgr.24970) 42 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.907285+0000 mon.a (mon.0) 1110 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.909351+0000 mon.c (mon.1) 109 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.910219+0000 mgr.y (mgr.24970) 43 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.916393+0000 mon.a (mon.0) 1111 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.917892+0000 mon.c (mon.1) 110 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.918639+0000 mgr.y (mgr.24970) 44 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.922401+0000 mon.a (mon.0) 1112 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.925246+0000 mon.c (mon.1) 111 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.925939+0000 mgr.y (mgr.24970) 45 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:45:55.196 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.933444+0000 mon.a (mon.0) 1113 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.936372+0000 mon.c (mon.1) 112 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.937042+0000 mgr.y (mgr.24970) 46 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.940531+0000 mon.a (mon.0) 1114 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.943792+0000 mon.c (mon.1) 113 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.944499+0000 mgr.y (mgr.24970) 47 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.945734+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.946410+0000 mgr.y (mgr.24970) 48 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: audit 2026-03-10T11:45:53.947709+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cephadm 2026-03-10T11:45:53.948377+0000 mgr.y (mgr.24970) 49 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-10T11:45:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:54 vm07 bash[17804]: cluster 2026-03-10T11:45:54.322839+0000 mgr.y (mgr.24970) 50 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:45:56.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:56 vm05 bash[22470]: cephadm 2026-03-10T11:45:54.392418+0000 mgr.y (mgr.24970) 51 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-10T11:45:56.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:56 vm05 bash[22470]: cephadm 2026-03-10T11:45:54.428733+0000 mgr.y (mgr.24970) 52 : cephadm [INF] Deploying daemon grafana.a on vm07 2026-03-10T11:45:56.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:56 vm05 bash[17453]: cephadm 2026-03-10T11:45:54.392418+0000 mgr.y (mgr.24970) 51 : cephadm [INF] Upgrade: Updating grafana.a 2026-03-10T11:45:56.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:56 vm05 bash[17453]: cephadm 2026-03-10T11:45:54.428733+0000 mgr.y (mgr.24970) 52 : cephadm [INF] Deploying daemon grafana.a on vm07 2026-03-10T11:45:56.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:56 vm07 bash[17804]: 
cephadm 2026-03-10T11:45:54.392418+0000 mgr.y (mgr.24970) 51 : cephadm [INF] Upgrade: Updating grafana.a
2026-03-10T11:45:56.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:56 vm07 bash[17804]: cephadm 2026-03-10T11:45:54.428733+0000 mgr.y (mgr.24970) 52 : cephadm [INF] Deploying daemon grafana.a on vm07
2026-03-10T11:45:57.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:57 vm07 bash[17804]: cluster 2026-03-10T11:45:56.323175+0000 mgr.y (mgr.24970) 53 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:45:57.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:57 vm05 bash[22470]: cluster 2026-03-10T11:45:56.323175+0000 mgr.y (mgr.24970) 53 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:45:57.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:57 vm05 bash[17453]: cluster 2026-03-10T11:45:56.323175+0000 mgr.y (mgr.24970) 53 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:45:59.166 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:45:59.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:45:59 vm05 bash[22470]: cluster 2026-03-10T11:45:58.323748+0000 mgr.y (mgr.24970) 54 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:45:59.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:45:59 vm05 bash[17453]: cluster 2026-03-10T11:45:58.323748+0000 mgr.y (mgr.24970) 54 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:45:59.342 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:45:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:45:58] "GET /metrics HTTP/1.1" 200 37552 "" "Prometheus/2.51.0"
2026-03-10T11:45:59.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:45:59 vm07 bash[17804]: cluster 2026-03-10T11:45:58.323748+0000 mgr.y (mgr.24970) 54 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:45:59.568 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:45:59.568 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (12m) 13s ago 19m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:45:59.568 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (12m) 13s ago 19m 39.9M - dad864ee21e9 ea7bd1695c30
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (21s) 13s ago 18m 41.3M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (18s) 13s ago 22m 288M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (9m) 13s ago 22m 518M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (22m) 13s ago 22m 69.3M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (22m) 13s ago 22m 53.6M 2048M 17.2.0 e1d6a67b021e 824de3717020
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (22m) 13s ago 22m 51.8M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (12m) 13s ago 19m 7908k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (12m) 13s ago 19m 7715k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (21m) 13s ago 21m 53.0M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (21m) 13s ago 21m 55.2M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (21m) 13s ago 21m 51.6M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (21m) 13s ago 21m 54.6M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (20m) 13s ago 20m 54.8M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (20m) 13s ago 20m 51.5M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (20m) 13s ago 20m 50.1M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (20m) 13s ago 20m 52.9M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (20s) 13s ago 19m 42.5M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (18m) 13s ago 18m 87.3M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:45:59.569 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (18m) 13s ago 18m 88.0M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {},
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13,
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:45:59.815 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:46:00.023 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true,
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mgr",
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout: "mgr"
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout: ],
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "2/2 daemons upgraded",
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading grafana daemons",
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:46:00.024 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:46:00.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:00 vm05 bash[22470]: audit 2026-03-10T11:45:58.918356+0000 mgr.y (mgr.24970) 55 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:46:00.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:00 vm05 bash[22470]: audit 2026-03-10T11:45:59.157623+0000 mgr.y (mgr.24970) 56 : audit [DBG] from='client.25129 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:46:00.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:00 vm05 bash[22470]: audit 2026-03-10T11:45:59.817696+0000 mon.c (mon.1) 116 : audit [DBG] from='client.? 192.168.123.105:0/1603010952' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:00.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:00 vm05 bash[17453]: audit 2026-03-10T11:45:58.918356+0000 mgr.y (mgr.24970) 55 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:46:00.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:00 vm05 bash[17453]: audit 2026-03-10T11:45:59.157623+0000 mgr.y (mgr.24970) 56 : audit [DBG] from='client.25129 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:46:00.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:00 vm05 bash[17453]: audit 2026-03-10T11:45:59.817696+0000 mon.c (mon.1) 116 : audit [DBG] from='client.?
192.168.123.105:0/1603010952' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:00.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:00 vm07 bash[17804]: audit 2026-03-10T11:45:58.918356+0000 mgr.y (mgr.24970) 55 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:46:00.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:00 vm07 bash[17804]: audit 2026-03-10T11:45:59.157623+0000 mgr.y (mgr.24970) 56 : audit [DBG] from='client.25129 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:00.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:00 vm07 bash[17804]: audit 2026-03-10T11:45:59.817696+0000 mon.c (mon.1) 116 : audit [DBG] from='client.? 192.168.123.105:0/1603010952' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:01 vm05 bash[22470]: audit 2026-03-10T11:45:59.366531+0000 mgr.y (mgr.24970) 57 : audit [DBG] from='client.25135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:01 vm05 bash[22470]: audit 2026-03-10T11:45:59.567268+0000 mgr.y (mgr.24970) 58 : audit [DBG] from='client.25138 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:01 vm05 bash[22470]: audit 2026-03-10T11:46:00.026536+0000 mgr.y (mgr.24970) 59 : audit [DBG] from='client.15234 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:01 vm05 bash[22470]: cluster 2026-03-10T11:46:00.324092+0000 mgr.y (mgr.24970) 60 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:01.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:01 vm05 bash[22470]: audit 2026-03-10T11:46:00.806050+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:01 vm05 bash[17453]: audit 2026-03-10T11:45:59.366531+0000 mgr.y (mgr.24970) 57 : audit [DBG] from='client.25135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:01 vm05 bash[17453]: audit 2026-03-10T11:45:59.567268+0000 mgr.y (mgr.24970) 58 : audit [DBG] from='client.25138 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:01 vm05 bash[17453]: audit 2026-03-10T11:46:00.026536+0000 mgr.y (mgr.24970) 59 : audit [DBG] from='client.15234 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:01.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:01 vm05 bash[17453]: cluster 2026-03-10T11:46:00.324092+0000 mgr.y (mgr.24970) 60 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:01.342 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:01 vm05 bash[17453]: audit 2026-03-10T11:46:00.806050+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:01.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:01 vm07 bash[17804]: audit 2026-03-10T11:45:59.366531+0000 mgr.y (mgr.24970) 57 : audit [DBG] from='client.25135 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:01.466 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:01 vm07 bash[17804]: audit 2026-03-10T11:45:59.567268+0000 mgr.y (mgr.24970) 58 : audit [DBG] from='client.25138 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:01.466 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:01 vm07 bash[17804]: audit 2026-03-10T11:46:00.026536+0000 mgr.y (mgr.24970) 59 : audit [DBG] from='client.15234 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:01.466 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:01 vm07 bash[17804]: cluster 2026-03-10T11:46:00.324092+0000 mgr.y (mgr.24970) 60 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:01.466 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:01 vm07 bash[17804]: audit 2026-03-10T11:46:00.806050+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:02.777 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:02 vm07 bash[17804]: cluster 2026-03-10T11:46:02.324875+0000 mgr.y (mgr.24970) 61 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:02.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:02 vm05 bash[22470]: cluster 2026-03-10T11:46:02.324875+0000 mgr.y (mgr.24970) 61 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:02.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:02 vm05 bash[17453]: cluster 2026-03-10T11:46:02.324875+0000 mgr.y (mgr.24970) 61 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:04.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
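
The `ceph orch upgrade status` JSON printed above (`target_image`, `in_progress`, `which`, `services_complete`, `progress`, `message`, `is_paused`) is what the harness keeps dispatching between these journal entries, per the `orch upgrade status` audit lines. A small polling loop under the same assumptions (a `ceph` CLI on PATH, and the command emitting JSON as it does in this log):

import json
import subprocess
import time

def wait_for_upgrade(poll_seconds: int = 30, timeout: int = 3600) -> dict:
    """Poll `ceph orch upgrade status` until cephadm reports the upgrade idle."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = json.loads(subprocess.check_output(["ceph", "orch", "upgrade", "status"]))
        print(status.get("progress"), "-", status.get("message"))
        if not status.get("in_progress"):
            return status
        if status.get("is_paused"):
            raise RuntimeError("upgrade is paused; resume with `ceph orch upgrade resume`")
        time.sleep(poll_seconds)
    raise TimeoutError("upgrade did not finish in time")
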
2026-03-10T11:46:04.195 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.195 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.196 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.196 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: Stopping Ceph grafana.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d... 2026-03-10T11:46:04.196 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:03 vm07 bash[37956]: t=2026-03-10T11:46:03+0000 lvl=info msg="Shutdown started" logger=server reason="System signal: terminated" 2026-03-10T11:46:04.196 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44714]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-grafana-a 2026-03-10T11:46:04.196 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@grafana.a.service: Deactivated successfully. 2026-03-10T11:46:04.196 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: Stopped Ceph grafana.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:46:04.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.196 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:46:03 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.534 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.534 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.534 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.534 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.534 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:04.534 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
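
Once the new grafana.a container is pulled, the journal shows systemd stopping the old unit and starting its replacement. One way to confirm a redeployed daemon came back is to watch `ceph orch ps` for it. The sketch below assumes the JSON form of that output carries `daemon_type`, `daemon_id`, and `status_desc` fields, which is how cephadm commonly serializes daemon descriptions but is an assumption here, not something this log shows:

import json
import subprocess
import time

def wait_daemon_running(name: str = "grafana.a", attempts: int = 20, delay: int = 15) -> dict:
    """Wait until `ceph orch ps` reports the named daemon running again after a redeploy."""
    daemon_type = name.split(".")[0]  # e.g. "grafana" for grafana.a
    for _ in range(attempts):
        out = subprocess.check_output(
            ["ceph", "orch", "ps", "--daemon-type", daemon_type, "--format", "json"])
        for daemon in json.loads(out):
            # field names assumed from cephadm's usual JSON serialization
            full_name = f"{daemon.get('daemon_type')}.{daemon.get('daemon_id')}"
            if full_name == name and daemon.get("status_desc") == "running":
                return daemon
        time.sleep(delay)
    raise TimeoutError(f"{name} did not return to running state")
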
2026-03-10T11:46:04.534 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:46:04.534 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:46:04.534 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:46:04.534 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 systemd[1]: Started Ceph grafana.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:46:04.534 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534330295Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-10T11:46:04Z
2026-03-10T11:46:04.534 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534686471Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
2026-03-10T11:46:04.534 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534748057Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534786028Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534818198Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534840139Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534873501Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534901193Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
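The "Config loaded from" and "Config overridden" entries show the order Grafana applies settings in this run: defaults.ini is read first, /etc/grafana/grafana.ini overrides it, the default.* command-line arguments override both, and the GF_* environment variables that follow are applied last. Environment overrides follow the GF_<SECTION>_<KEY> convention; a sketch matching the values seen here (how the container was given these variables is not shown in this log):

    # overrides [paths] data and [paths] logs from grafana.ini
    export GF_PATHS_DATA=/var/lib/grafana
    export GF_PATHS_LOGS=/var/log/grafana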
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534927222Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534948121Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.534981594Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.53500604Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.535030706Z level=info msg=Target target=[all]
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.53505407Z level=info msg="Path Home" path=/usr/share/grafana
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.535087552Z level=info msg="Path Data" path=/var/lib/grafana
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.535111577Z level=info msg="Path Logs" path=/var/log/grafana
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.535135572Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.535159026Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=settings t=2026-03-10T11:46:04.535202056Z level=info msg="App mode production"
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=sqlstore t=2026-03-10T11:46:04.535424452Z level=info msg="Connecting to DB" dbtype=sqlite3
2026-03-10T11:46:04.535 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=sqlstore t=2026-03-10T11:46:04.535475939Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
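The sqlstore warning flags /var/lib/grafana/grafana.db at mode -rw-r--r-- where -rw-r----- is expected; in octal that is 644 versus 640. A one-line fix, run wherever that path is visible (inside the grafana container on this deployment):

    chmod 640 /var/lib/grafana/grafana.db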
nullable" duration=21.601188ms 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.591451218Z level=info msg="Executing migration" id="Add uid column to user" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.594076072Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.624993ms 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.595253425Z level=info msg="Executing migration" id="Update uid column values for users" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.595424335Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=170.94µs 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.596339377Z level=info msg="Executing migration" id="Add unique index user_uid" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.596905336Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=565.859µs 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.598322699Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.600576527Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.252928ms 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.601733352Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.601884394Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=150.812µs 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.602794929Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.605022227Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.226767ms 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.60641817Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.608559278Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.139555ms 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.610291339Z level=info 
msg="Executing migration" id="Add playlist column created_at" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.612500854Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.208774ms 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.613785949Z level=info msg="Executing migration" id="Add playlist column updated_at" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.616026843Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.240934ms 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.61702951Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.619248673Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.216599ms 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.620367246Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.620490848Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=123.912µs 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.621570188Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.62217001Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=600.774µs 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.623488618Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.624049257Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=560.801µs 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.625400877Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.625497367Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=96.461µs 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.626639195Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator 
t=2026-03-10T11:46:04.628883084Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=2.242528ms 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.629992521Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.630158542Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=165.98µs 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.631066369Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.631673406Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=607.037µs 2026-03-10T11:46:04.785 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.633039333Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.635302378Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=2.262134ms 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.636476095Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.6371467Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=669.835µs 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.638586595Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.63868566Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=99.095µs 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.639670372Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.642170141Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=2.498415ms 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.643354297Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.645641467Z level=info msg="Migration successfully executed" 
id="add result_fingerprint column to alert_instance" duration=2.288443ms 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.648003198Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.650840288Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=2.834715ms 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.652329565Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.655265369Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=2.934661ms 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.656713259Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.656832182Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=119.184µs 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.66458945Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.671601553Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.005681ms 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.673159168Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.675605566Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=2.446498ms 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.676907784Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.677048557Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=141.574µs 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.678235508Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.680555009Z level=info msg="Migration successfully executed" 
id="add configuration_hash column to alert_configuration" duration=2.318148ms 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.681740058Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.684324495Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=2.58061ms 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.685420706Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.685909702Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=488.194µs 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.687338425Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.688015453Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=672.729µs 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.689442764Z level=info msg="Executing migration" id="create alert_image table" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.690084164Z level=info msg="Migration successfully executed" id="create alert_image table" duration=641.159µs 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.691503611Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.692147967Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=642.854µs 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.693505828Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.693570809Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=65.643µs 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.695080675Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.695633941Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table 
2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.69698005Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.697718292Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=738.513µs
2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.698881148Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
2026-03-10T11:46:04.786 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.699119725Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
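The "Skipping migration: Already executed, but not recorded in migration log" warning means the schema change is already present in the SQLite file while the bookkeeping row is missing; Grafana's migrator records each applied migration in a table inside the same database. A sketch for inspecting that bookkeeping directly (the migration_log table and column names are an assumption from Grafana's migrator and may differ across versions):

    sqlite3 /var/lib/grafana/grafana.db \
      "SELECT migration_id, success, timestamp FROM migration_log ORDER BY id DESC LIMIT 10;"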
msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=73.518µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.70892731Z level=info msg="Executing migration" id="create secrets table" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.709548853Z level=info msg="Migration successfully executed" id="create secrets table" duration=620.771µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.710902396Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.722496936Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=11.589101ms 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.723968259Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.726808124Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.844443ms 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.728263048Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.728508897Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=244.106µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.729687423Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.741044899Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=11.351594ms 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.742711549Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.753584077Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=10.857751ms 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.755053567Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.757451505Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.392598ms 2026-03-10T11:46:04.787 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.758645059Z level=info msg="Executing migration" id="permission kind migration" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.760982784Z level=info msg="Migration successfully executed" id="permission kind migration" duration=2.335812ms 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.762194181Z level=info msg="Executing migration" id="permission attribute migration" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.764457877Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.263716ms 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.766225175Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.768426906Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.201289ms 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.769388565Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.769964814Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=574.405µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.771183234Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.771667901Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=484.376µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.772835987Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.773397027Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=560.971µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.774524818Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.776141022Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.615833ms 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.777142728Z level=info msg="Executing migration" 
id="add index query_history.org_id-created_by-datasource_uid" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.777705851Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=563.264µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.779091204Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.779174661Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=83.656µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.780224345Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.78026964Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=45.454µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.782150179Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.782531903Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=382.285µs 2026-03-10T11:46:04.787 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.783721489Z level=info msg="Executing migration" id="dashboard permissions" 2026-03-10T11:46:05.038 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.78616354Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=2.442301ms 2026-03-10T11:46:05.038 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.787872278Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-10T11:46:05.038 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.789773426Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.901709ms 2026-03-10T11:46:05.038 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.791087146Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-10T11:46:05.038 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.791310513Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=223.057µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.792202934Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 
vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.792517402Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=314.168µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.793698272Z level=info msg="Executing migration" id="create query_history_star table v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.794255184Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=556.261µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.795702192Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.796253103Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=552.354µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.797625632Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.800147622Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.52198ms 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.80127916Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.801384718Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=106.159µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.802500366Z level=info msg="Executing migration" id="create correlation table v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.803038774Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=538.088µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.80429232Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.804811Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=518.69µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.80606124Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.80669698Z level=info msg="Migration successfully executed" 
id="add index correlations.source_uid" duration=635.63µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.807901555Z level=info msg="Executing migration" id="add correlation config column" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.810305374Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.403188ms 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.811419528Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.81190649Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=488.804µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.812813007Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.813324964Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=511.918µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.814425394Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.821138558Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.711873ms 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.822390401Z level=info msg="Executing migration" id="create correlation v2" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.822968453Z level=info msg="Migration successfully executed" id="create correlation v2" duration=576.068µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.82391301Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.824399711Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=486.681µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.825638739Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.82618357Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=544.8µs 2026-03-10T11:46:05.039 
INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.827272328Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.827745142Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=472.575µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.828831746Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.829035768Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=203.771µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.830080052Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.830510648Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=430.505µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.831368022Z level=info msg="Executing migration" id="add provisioning column" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.833762855Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.394421ms 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.834757045Z level=info msg="Executing migration" id="create entity_events table" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.835173916Z level=info msg="Migration successfully executed" id="create entity_events table" duration=416.93µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.836021071Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.836545362Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=522.146µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.837742282Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.83797667Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.838980239Z level=info msg="Executing migration" id="drop 
index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.839204708Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.840431876Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.840911824Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=479.878µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.841971387Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.84252326Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=550.439µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.843385284Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.843881401Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=496.348µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.845110512Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.845698072Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=587.329µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.846944044Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.847429893Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=484.096µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.848250599Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.848773597Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=522.798µs 2026-03-10T11:46:05.039 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 
vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.84979559Z level=info msg="Executing migration" id="Drop public config table" 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.850257044Z level=info msg="Migration successfully executed" id="Drop public config table" duration=460.771µs 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.851207552Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.851836971Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=629.198µs 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.852731946Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.853360322Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=627.995µs 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.854471381Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.854969593Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=498.533µs 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.85579022Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.856292661Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=502.36µs 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.857469884Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.864038236Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=6.566398ms 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.865264211Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.867755473Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.491142ms 2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.871271434Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.364977ms
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.872368276Z level=info msg="Executing migration" id="delete orphaned public dashboards"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.872568341Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=200.255µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.873626391Z level=info msg="Executing migration" id="add share column"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.875987489Z level=info msg="Migration successfully executed" id="add share column" duration=2.361109ms
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.876968025Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.877138755Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=170.709µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.878176667Z level=info msg="Executing migration" id="create file table"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.878633962Z level=info msg="Migration successfully executed" id="create file table" duration=457.005µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.880208689Z level=info msg="Executing migration" id="file table idx: path natural pk"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.880700661Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=493.735µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.881868687Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.882353915Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=475.02µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.883524445Z level=info msg="Executing migration" id="create file_meta table"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.883919205Z level=info msg="Migration successfully executed" id="create file_meta table" duration=393.267µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.885078955Z level=info msg="Executing migration" id="file table idx: path key"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.885579191Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=500.456µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.886760371Z level=info msg="Executing migration" id="set path collation in file table"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.886835532Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=77.706µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.887709889Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.887808123Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=98.013µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.888624852Z level=info msg="Executing migration" id="managed permissions migration"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.890046292Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.421571ms
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.891331166Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.892036075Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=704.819µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.892967298Z level=info msg="Executing migration" id="RBAC action name migrator"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.893687816Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=720.348µs
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.894695862Z level=info msg="Executing migration" id="Add UID column to playlist"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.897349649Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.654608ms
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.898481999Z level=info msg="Executing migration" id="Update uid column values in playlist"
2026-03-10T11:46:05.040 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.898686882Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=204.913µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.899559455Z level=info msg="Executing migration" id="Add index for uid in playlist"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.900090971Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=531.255µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.901376937Z level=info msg="Executing migration" id="update group index for alert rules"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.901638927Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=262.171µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.902657624Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.903137121Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=479.547µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.904010626Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.904317039Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=306.754µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.905213237Z level=info msg="Executing migration" id="add action column to seed_assignment"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.907787615Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.573516ms
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.908971881Z level=info msg="Executing migration" id="add scope column to seed_assignment"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.91168041Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=2.709922ms
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.912891316Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.913467635Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=576.499µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.914567454Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.939957933Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=25.38598ms
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.94162916Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.942224375Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=596.497µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.943301831Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.943810533Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=508.581µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.945115636Z level=info msg="Executing migration" id="add primary key to seed_assigment"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.95546808Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=10.349789ms
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.957078685Z level=info msg="Executing migration" id="add origin column to seed_assignment"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.95987086Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=2.792005ms
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.961295827Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.961474552Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=179.115µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.962504369Z level=info msg="Executing migration" id="prevent seeding OnCall access"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.962637398Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=133.861µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.963444268Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.96399547Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=551.561µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.965068848Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.967649778Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=2.577473ms
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.968811953Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.968987202Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=175.761µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.970078996Z level=info msg="Executing migration" id="create folder table"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.970703855Z level=info msg="Migration successfully executed" id="create folder table" duration=624.969µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.973758091Z level=info msg="Executing migration" id="Add index for parent_uid"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.974787657Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.030748ms
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.976168422Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.976919538Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=751.346µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.978158176Z level=info msg="Executing migration" id="Update folder title length"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.978194453Z level=info msg="Migration successfully executed" id="Update folder title length" duration=36.428µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.979128272Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.979751038Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=622.946µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.980954791Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.98157976Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=623.807µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.982636167Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.983249505Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=612.236µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.984585225Z level=info msg="Executing migration" id="Sync dashboard and folder table"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.984972991Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=387.905µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.986053954Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.986267694Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=213.399µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.987127182Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.987713399Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=586.137µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.988507345Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.989119771Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=612.316µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.990185926Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.990837486Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=651.509µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.991635971Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.992254428Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=616.503µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.993117363Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.993784341Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=668.831µs
2026-03-10T11:46:05.041 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.994810642Z level=info msg="Executing migration" id="create anon_device table"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.995345744Z level=info msg="Migration successfully executed" id="create anon_device table" duration=535.323µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.996224417Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.996939827Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=714.749µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.998349104Z level=info msg="Executing migration" id="add index anon_device.updated_at"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:04 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:04.999079953Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=730.699µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.000380797Z level=info msg="Executing migration" id="create signing_key table"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.000883377Z level=info msg="Migration successfully executed" id="create signing_key table" duration=502.971µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.002183821Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.002762634Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=579.053µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.004396051Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.005008798Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=613.228µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.00612082Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.006399761Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=279.373µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.007306249Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.009997735Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.690696ms
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.011114155Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.012244811Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.131507ms
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.014626008Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.015400858Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=775.842µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.016253344Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.016836434Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=583.542µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.01823942Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.018852127Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=613.9µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.019694082Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.020266634Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=573.905µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.021073315Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.021755451Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=681.936µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.022766303Z level=info msg="Executing migration" id="create sso_setting table"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.023403304Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=635.699µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.024328115Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.024943668Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=615.623µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.025986459Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.026288104Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=301.734µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.027162761Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.027210441Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=47.92µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.028057795Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.031014881Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.958116ms
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.032024571Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.034801147Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.776046ms
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.03583426Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.036009538Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=175.368µs
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=migrator t=2026-03-10T11:46:05.03683852Z level=info msg="migrations completed" performed=169 skipped=378 duration=468.868468ms
2026-03-10T11:46:05.042 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=sqlstore t=2026-03-10T11:46:05.037351359Z level=info msg="Created default organization"
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=secrets t=2026-03-10T11:46:05.040166397Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=plugin.store t=2026-03-10T11:46:05.050507491Z level=info msg="Loading plugins..."
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=local.finder t=2026-03-10T11:46:05.093076602Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=plugin.store t=2026-03-10T11:46:05.093136814Z level=info msg="Plugins loaded" count=55 duration=42.629734ms
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=query_data t=2026-03-10T11:46:05.094851844Z level=info msg="Query Service initialization"
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=live.push_http t=2026-03-10T11:46:05.096652935Z level=info msg="Live Push Gateway initialization"
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=ngalert.migration t=2026-03-10T11:46:05.098596522Z level=info msg=Starting
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=ngalert t=2026-03-10T11:46:05.10288562Z level=warn msg="Unexpected number of rows updating alert configuration history" rows=0 org=1 hash=not-yet-calculated
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=ngalert.state.manager t=2026-03-10T11:46:05.103508836Z level=info msg="Running in alternative execution of Error/NoData mode"
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=infra.usagestats.collector t=2026-03-10T11:46:05.104497175Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=provisioning.datasources t=2026-03-10T11:46:05.107335228Z level=info msg="deleted datasource based on configuration" name=Dashboard1
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=provisioning.datasources t=2026-03-10T11:46:05.107548927Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=provisioning.alerting t=2026-03-10T11:46:05.119661407Z level=info msg="starting to provision alerting"
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=provisioning.alerting t=2026-03-10T11:46:05.119717141Z level=info msg="finished to provision alerting"
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=grafanaStorageLogger t=2026-03-10T11:46:05.120016611Z level=info msg="Storage starting"
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=http.server t=2026-03-10T11:46:05.124340404Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=http.server t=2026-03-10T11:46:05.124974861Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=ngalert.state.manager t=2026-03-10T11:46:05.125226362Z level=info msg="Warming state cache for startup" 2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=ngalert.state.manager t=2026-03-10T11:46:05.129267826Z level=info msg="State cache has been initialized" states=0 duration=4.040742ms 2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=provisioning.dashboard t=2026-03-10T11:46:05.131333882Z level=info msg="starting to provision dashboards" 2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=ngalert.multiorg.alertmanager t=2026-03-10T11:46:05.147389611Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=ngalert.scheduler t=2026-03-10T11:46:05.147608361Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=ticker t=2026-03-10T11:46:05.147651633Z level=info msg=starting first_tick=2026-03-10T11:46:10Z 2026-03-10T11:46:05.324 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=plugins.update.checker t=2026-03-10T11:46:05.223868645Z level=info msg="Update check succeeded" duration=77.35901ms 2026-03-10T11:46:05.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:05 vm05 bash[22470]: cluster 2026-03-10T11:46:04.325177+0000 mgr.y (mgr.24970) 62 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:05.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:05 vm05 bash[22470]: audit 2026-03-10T11:46:04.336091+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:05.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:05 vm05 bash[22470]: audit 2026-03-10T11:46:04.343956+0000 mon.a (mon.0) 1116 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:05.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:05 vm05 bash[22470]: audit 2026-03-10T11:46:04.346474+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:05.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:05 vm05 bash[17453]: cluster 2026-03-10T11:46:04.325177+0000 mgr.y (mgr.24970) 62 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:05.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:05 vm05 bash[17453]: audit 2026-03-10T11:46:04.336091+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:05.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:05 vm05 bash[17453]: audit 2026-03-10T11:46:04.343956+0000 mon.a (mon.0) 1116 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:05.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:05 vm05 bash[17453]: audit 
2026-03-10T11:46:04.346474+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:05.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:05 vm07 bash[17804]: cluster 2026-03-10T11:46:04.325177+0000 mgr.y (mgr.24970) 62 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:05.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:05 vm07 bash[17804]: audit 2026-03-10T11:46:04.336091+0000 mon.a (mon.0) 1115 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:05.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:05 vm07 bash[17804]: audit 2026-03-10T11:46:04.343956+0000 mon.a (mon.0) 1116 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:05.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:05 vm07 bash[17804]: audit 2026-03-10T11:46:04.346474+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:05.695 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=provisioning.dashboard t=2026-03-10T11:46:05.325421847Z level=info msg="finished to provision dashboards" 2026-03-10T11:46:05.695 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=grafana-apiserver t=2026-03-10T11:46:05.351954625Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-10T11:46:05.695 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:05 vm07 bash[44829]: logger=grafana-apiserver t=2026-03-10T11:46:05.352283872Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-10T11:46:07.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:07 vm07 bash[17804]: cluster 2026-03-10T11:46:06.325629+0000 mgr.y (mgr.24970) 63 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:07.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:07 vm05 bash[17453]: cluster 2026-03-10T11:46:06.325629+0000 mgr.y (mgr.24970) 63 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:07.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:07 vm05 bash[22470]: cluster 2026-03-10T11:46:06.325629+0000 mgr.y (mgr.24970) 63 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:09.091 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:46:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:46:08] "GET /metrics HTTP/1.1" 200 37542 "" "Prometheus/2.51.0" 2026-03-10T11:46:09.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:09 vm07 bash[17804]: cluster 2026-03-10T11:46:08.326218+0000 mgr.y (mgr.24970) 64 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:09.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:09 vm05 bash[22470]: cluster 2026-03-10T11:46:08.326218+0000 mgr.y (mgr.24970) 64 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:09.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 
10 11:46:09 vm05 bash[17453]: cluster 2026-03-10T11:46:08.326218+0000 mgr.y (mgr.24970) 64 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:10.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:10 vm07 bash[17804]: audit 2026-03-10T11:46:08.924036+0000 mgr.y (mgr.24970) 65 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:46:10.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:10 vm07 bash[17804]: audit 2026-03-10T11:46:09.863359+0000 mon.a (mon.0) 1117 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:10.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:10 vm07 bash[17804]: audit 2026-03-10T11:46:09.871117+0000 mon.a (mon.0) 1118 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:10.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:10 vm05 bash[22470]: audit 2026-03-10T11:46:08.924036+0000 mgr.y (mgr.24970) 65 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:46:10.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:10 vm05 bash[22470]: audit 2026-03-10T11:46:09.863359+0000 mon.a (mon.0) 1117 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:10.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:10 vm05 bash[22470]: audit 2026-03-10T11:46:09.871117+0000 mon.a (mon.0) 1118 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:10.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:10 vm05 bash[17453]: audit 2026-03-10T11:46:08.924036+0000 mgr.y (mgr.24970) 65 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:46:10.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:10 vm05 bash[17453]: audit 2026-03-10T11:46:09.863359+0000 mon.a (mon.0) 1117 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:10.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:10 vm05 bash[17453]: audit 2026-03-10T11:46:09.871117+0000 mon.a (mon.0) 1118 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:11 vm07 bash[17804]: cluster 2026-03-10T11:46:10.326561+0000 mgr.y (mgr.24970) 66 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:11 vm07 bash[17804]: audit 2026-03-10T11:46:10.497349+0000 mon.a (mon.0) 1119 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:11 vm07 bash[17804]: audit 2026-03-10T11:46:10.503848+0000 mon.a (mon.0) 1120 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:11.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:11 vm05 bash[22470]: cluster 2026-03-10T11:46:10.326561+0000 mgr.y (mgr.24970) 66 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:11.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:11 vm05 bash[22470]: audit 2026-03-10T11:46:10.497349+0000 mon.a (mon.0) 1119 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:11.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:11 
vm05 bash[22470]: audit 2026-03-10T11:46:10.503848+0000 mon.a (mon.0) 1120 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:11.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:11 vm05 bash[17453]: cluster 2026-03-10T11:46:10.326561+0000 mgr.y (mgr.24970) 66 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:11.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:11 vm05 bash[17453]: audit 2026-03-10T11:46:10.497349+0000 mon.a (mon.0) 1119 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:11.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:11 vm05 bash[17453]: audit 2026-03-10T11:46:10.503848+0000 mon.a (mon.0) 1120 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:12.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:12 vm07 bash[17804]: cluster 2026-03-10T11:46:12.327343+0000 mgr.y (mgr.24970) 67 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:12.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:12 vm05 bash[22470]: cluster 2026-03-10T11:46:12.327343+0000 mgr.y (mgr.24970) 67 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:12.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:12 vm05 bash[17453]: cluster 2026-03-10T11:46:12.327343+0000 mgr.y (mgr.24970) 67 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:14.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:14 vm07 bash[17804]: cluster 2026-03-10T11:46:14.327801+0000 mgr.y (mgr.24970) 68 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:14 vm05 bash[17453]: cluster 2026-03-10T11:46:14.327801+0000 mgr.y (mgr.24970) 68 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:14 vm05 bash[22470]: cluster 2026-03-10T11:46:14.327801+0000 mgr.y (mgr.24970) 68 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:16.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:15 vm07 bash[17804]: audit 2026-03-10T11:46:15.806220+0000 mon.c (mon.1) 119 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:16.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:15 vm05 bash[22470]: audit 2026-03-10T11:46:15.806220+0000 mon.c (mon.1) 119 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:16.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:15 vm05 bash[17453]: audit 2026-03-10T11:46:15.806220+0000 mon.c (mon.1) 119 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:17.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:16 vm07 bash[17804]: cluster 2026-03-10T11:46:16.328226+0000 mgr.y (mgr.24970) 69 : cluster 
[DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:16 vm05 bash[22470]: cluster 2026-03-10T11:46:16.328226+0000 mgr.y (mgr.24970) 69 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:17.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:16 vm05 bash[17453]: cluster 2026-03-10T11:46:16.328226+0000 mgr.y (mgr.24970) 69 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:18.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.344382+0000 mon.a (mon.0) 1121 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:18.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.353815+0000 mon.a (mon.0) 1122 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:18.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.356797+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.357392+0000 mon.c (mon.1) 121 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.361575+0000 mon.a (mon.0) 1123 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.377346+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.411520+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.413036+0000 mon.c (mon.1) 124 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.414077+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.415062+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.416144+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.417018+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.417737+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.418571+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.419577+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.420478+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.421247+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.422035+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.422856+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.423576+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.424332+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.432509+0000 mon.a (mon.0) 1124 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.436745+0000 mon.c (mon.1) 138 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.437118+0000 mon.a (mon.0) 1125 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 
2026-03-10T11:46:17.442166+0000 mon.a (mon.0) 1126 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.443040+0000 mon.c (mon.1) 139 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.443344+0000 mon.a (mon.0) 1127 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.448365+0000 mon.a (mon.0) 1128 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.449444+0000 mon.c (mon.1) 140 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.449787+0000 mon.a (mon.0) 1129 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.455554+0000 mon.a (mon.0) 1130 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.456307+0000 mon.c (mon.1) 141 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.456596+0000 mon.a (mon.0) 1131 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.457041+0000 mon.c (mon.1) 142 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.457297+0000 mon.a (mon.0) 1132 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.461749+0000 mon.a (mon.0) 1133 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.462255+0000 mon.c (mon.1) 143 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.462490+0000 mon.a (mon.0) 1134 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.462954+0000 mon.c (mon.1) 144 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.463201+0000 mon.a (mon.0) 1135 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.466533+0000 mon.a (mon.0) 1136 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.467462+0000 mon.c (mon.1) 145 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.467689+0000 mon.a (mon.0) 1137 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.468161+0000 mon.c (mon.1) 146 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.468378+0000 mon.a (mon.0) 1138 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.471708+0000 mon.a (mon.0) 1139 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.472599+0000 mon.c (mon.1) 147 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.472817+0000 mon.a (mon.0) 1140 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.473275+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.473524+0000 mon.a (mon.0) 1141 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.476768+0000 mon.a (mon.0) 1142 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.477775+0000 mon.c (mon.1) 149 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.477989+0000 mon.a (mon.0) 1143 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.481115+0000 mon.a (mon.0) 1144 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.482066+0000 mon.c (mon.1) 150 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.482283+0000 mon.a (mon.0) 1145 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.482757+0000 mon.c (mon.1) 151 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.482970+0000 mon.a (mon.0) 1146 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.483413+0000 mon.c (mon.1) 152 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.483668+0000 mon.a (mon.0) 1147 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
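Each journalctl line above wraps one cluster audit entry: the monitor that logged it and its rank ("mon.c (mon.1)"), a per-monitor sequence number, the channel and level ("audit [INF]"), the issuer ("from='mgr.24970 ...'" is the active mgr), the authenticated entity, the command as a JSON array, and whether the entry records the command being dispatched or finished. A minimal, illustrative parser for these entries (not part of teuthology; the regex is an assumption fitted to the lines above, and entries without a cmd=... payload are skipped) could look like:

    #!/usr/bin/env python3
    # Illustrative parser for the wrapped audit entries in this log.
    # It extracts the cluster-log payload and decodes the cmd JSON so the
    # "config rm ... container_image" sweep can be tabulated per "who".
    import json
    import re

    AUDIT_RE = re.compile(
        r"audit (?P<stamp>\S+) (?P<mon>mon\.\w) \(mon\.\d+\) (?P<seq>\d+) : "
        r"audit \[(?P<level>INF|DBG)\] from='(?P<from>[^']*)' "
        r"entity='(?P<entity>[^']*)' cmd='?(?P<cmd>\[.*\])'?: (?P<status>\w+)"
    )

    def parse_audit(line: str):
        """Return a dict for one audit entry, or None if the line has no cmd."""
        m = AUDIT_RE.search(line)
        if not m:
            return None
        rec = m.groupdict()
        rec["cmd"] = json.loads(rec["cmd"])  # e.g. [{"prefix": "config rm", ...}]
        return rec

    if __name__ == "__main__":
        sample = ("""bash[17804]: audit 2026-03-10T11:46:17.443040+0000 mon.c (mon.1) 139 : """
                  """audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' """
                  """cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch""")
        rec = parse_audit(sample)
        print(rec["cmd"][0]["who"], rec["status"])  # -> mon dispatch

Run against this stream, such a parser would show the finalization sweep issuing config rm for container_image once per daemon type, each command dispatched on the peon mon.c and again on the leader mon.a before the leader reports it finished.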
2026-03-10T11:46:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.484104+0000 mon.c (mon.1) 153 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.484309+0000 mon.a (mon.0) 1148 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.484739+0000 mon.c (mon.1) 154 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.484952+0000 mon.a (mon.0) 1149 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.485393+0000 mon.c (mon.1) 155 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.485616+0000 mon.a (mon.0) 1150 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.486263+0000 mon.c (mon.1) 156 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:46:18.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.486522+0000 mon.a (mon.0) 1151 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:46:18.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.489693+0000 mon.a (mon.0) 1152 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:46:18.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:18 vm07 bash[17804]: audit 2026-03-10T11:46:17.491327+0000 mon.c (mon.1) 157 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.344382+0000 mon.a (mon.0) 1121 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.353815+0000 mon.a (mon.0) 1122 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.356797+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.357392+0000 mon.c (mon.1) 121 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.361575+0000 mon.a (mon.0) 1123 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.377346+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.411520+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.413036+0000 mon.c (mon.1) 124 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.414077+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.415062+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.416144+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.417018+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.417737+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.418571+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.419577+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.420478+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.421247+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.422035+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.422856+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.423576+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.424332+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.432509+0000 mon.a (mon.0) 1124 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.436745+0000 mon.c (mon.1) 138 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.437118+0000 mon.a (mon.0) 1125 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.442166+0000 mon.a (mon.0) 1126 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.443040+0000 mon.c (mon.1) 139 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.443344+0000 mon.a (mon.0) 1127 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.448365+0000 mon.a (mon.0) 1128 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.449444+0000 mon.c (mon.1) 140 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.449787+0000 mon.a (mon.0) 1129 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.455554+0000 mon.a (mon.0) 1130 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:46:18.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.456307+0000 mon.c (mon.1) 141 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.456596+0000 mon.a (mon.0) 1131 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.457041+0000 mon.c (mon.1) 142 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.457297+0000 mon.a (mon.0) 1132 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.461749+0000 mon.a (mon.0) 1133 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.462255+0000 mon.c (mon.1) 143 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.462490+0000 mon.a (mon.0) 1134 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.462954+0000 mon.c (mon.1) 144 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.463201+0000 mon.a (mon.0) 1135 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.466533+0000 mon.a (mon.0) 1136 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.467462+0000 mon.c (mon.1) 145 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.467689+0000 mon.a (mon.0) 1137 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.468161+0000 mon.c (mon.1) 146 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.468378+0000 mon.a (mon.0) 1138 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.471708+0000 mon.a (mon.0) 1139 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.472599+0000 mon.c (mon.1) 147 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.472817+0000 mon.a (mon.0) 1140 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.473275+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.473524+0000 mon.a (mon.0) 1141 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.476768+0000 mon.a (mon.0) 1142 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.477775+0000 mon.c (mon.1) 149 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.477989+0000 mon.a (mon.0) 1143 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.481115+0000 mon.a (mon.0) 1144 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.482066+0000 mon.c (mon.1) 150 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.482283+0000 mon.a (mon.0) 1145 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.482757+0000 mon.c (mon.1) 151 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.482970+0000 mon.a (mon.0) 1146 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.483413+0000 mon.c (mon.1) 152 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.483668+0000 mon.a (mon.0) 1147 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.484104+0000 mon.c (mon.1) 153 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.484309+0000 mon.a (mon.0) 1148 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.484739+0000 mon.c (mon.1) 154 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.484952+0000 mon.a (mon.0) 1149 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.485393+0000 mon.c (mon.1) 155 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.485616+0000 mon.a (mon.0) 1150 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.486263+0000 mon.c (mon.1) 156 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.486522+0000 mon.a (mon.0) 1151 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.489693+0000 mon.a (mon.0) 1152 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:18 vm05 bash[22470]: audit 2026-03-10T11:46:17.491327+0000 mon.c (mon.1) 157 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.344382+0000 mon.a (mon.0) 1121 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.353815+0000 mon.a (mon.0) 1122 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.356797+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.357392+0000 mon.c (mon.1) 121 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.361575+0000 mon.a (mon.0) 1123 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.377346+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.411520+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.413036+0000 mon.c (mon.1) 124 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.414077+0000 mon.c (mon.1) 125 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.415062+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.416144+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.417018+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.417737+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.418571+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.419577+0000 mon.c (mon.1) 131 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.420478+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.421247+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.422035+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.422856+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.423576+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.424332+0000 mon.c (mon.1) 137 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.432509+0000 mon.a (mon.0) 1124 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.436745+0000 mon.c (mon.1) 138 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.437118+0000 mon.a (mon.0) 1125 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.442166+0000 mon.a (mon.0) 1126 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.443040+0000 mon.c (mon.1) 139 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.443344+0000 mon.a (mon.0) 1127 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.448365+0000 mon.a (mon.0) 1128 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.449444+0000 mon.c (mon.1) 140 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.449787+0000 mon.a (mon.0) 1129 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.455554+0000 mon.a (mon.0) 1130 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.456307+0000 mon.c (mon.1) 141 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.456596+0000 mon.a (mon.0) 1131 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.457041+0000 mon.c (mon.1) 142 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.457297+0000 mon.a (mon.0) 1132 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.461749+0000 mon.a (mon.0) 1133 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.462255+0000 mon.c (mon.1) 143 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.462490+0000 mon.a (mon.0) 1134 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.462954+0000 mon.c (mon.1) 144 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.463201+0000 mon.a (mon.0) 1135 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.466533+0000 mon.a (mon.0) 1136 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.467462+0000 mon.c (mon.1) 145 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.467689+0000 mon.a (mon.0) 1137 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.468161+0000 mon.c (mon.1) 146 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.468378+0000 mon.a (mon.0) 1138 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.471708+0000 mon.a (mon.0) 1139 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.472599+0000 mon.c (mon.1) 147 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.472817+0000 mon.a (mon.0) 1140 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.473275+0000 mon.c (mon.1) 148 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.473524+0000 mon.a (mon.0) 1141 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.476768+0000 mon.a (mon.0) 1142 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.477775+0000 mon.c (mon.1) 149 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.477989+0000 mon.a (mon.0) 1143 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.481115+0000 mon.a (mon.0) 1144 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.482066+0000 mon.c (mon.1) 150 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.482283+0000 mon.a (mon.0) 1145 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.482757+0000 mon.c (mon.1) 151 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.482970+0000 mon.a (mon.0) 1146 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.483413+0000 mon.c (mon.1) 152 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.483668+0000 mon.a (mon.0) 1147 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.484104+0000 mon.c (mon.1) 153 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.484309+0000 mon.a (mon.0) 1148 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.484739+0000 mon.c (mon.1) 154 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.484952+0000 mon.a (mon.0) 1149 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.485393+0000 mon.c (mon.1) 155 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.485616+0000 mon.a (mon.0) 1150 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.486263+0000 mon.c (mon.1) 156 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.486522+0000 mon.a (mon.0) 1151 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.489693+0000 mon.a (mon.0) 1152 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:46:18.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:18 vm05 bash[17453]: audit 2026-03-10T11:46:17.491327+0000 mon.c (mon.1) 157 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:19.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:46:18 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:46:18] "GET /metrics HTTP/1.1" 200 37549 "" "Prometheus/2.51.0"
2026-03-10T11:46:19.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:19 vm07 bash[17804]: audit 2026-03-10T11:46:17.377717+0000 mgr.y (mgr.24970) 70 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:46:19.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:19 vm07 bash[17804]: cephadm 2026-03-10T11:46:17.424847+0000 mgr.y (mgr.24970) 71 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:46:19.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:19 vm07 bash[17804]: cephadm 2026-03-10T11:46:17.486005+0000 mgr.y (mgr.24970) 72 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:46:19.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:19 vm07 bash[17804]: cluster 2026-03-10T11:46:18.328968+0000 mgr.y (mgr.24970) 73 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:19.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:19 vm05 bash[17453]: audit 2026-03-10T11:46:17.377717+0000 mgr.y (mgr.24970) 70 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:46:19.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:19 vm05 bash[17453]: cephadm 2026-03-10T11:46:17.424847+0000 mgr.y (mgr.24970) 71 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:46:19.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:19 vm05 bash[17453]: cephadm 2026-03-10T11:46:17.486005+0000 mgr.y (mgr.24970) 72 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:46:19.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:19 vm05 bash[17453]: cluster 2026-03-10T11:46:18.328968+0000 mgr.y (mgr.24970) 73 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:19.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:19 vm05 bash[22470]: audit 2026-03-10T11:46:17.377717+0000 mgr.y (mgr.24970) 70 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-10T11:46:19.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:19 vm05 bash[22470]: cephadm 2026-03-10T11:46:17.424847+0000 mgr.y (mgr.24970) 71 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:46:19.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:19 vm05 bash[22470]: cephadm 2026-03-10T11:46:17.486005+0000 mgr.y (mgr.24970) 72 : cephadm [INF] Upgrade: Complete!
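The cephadm entries above are the tail of the staggered upgrade: mgr.y logs "Upgrade: Finalizing container_image settings", removes the per-daemon-type container_image pins it had set while upgrading one service at a time (mgr, mon, osd, mds and the client.* daemon types seen in the audit sweep), drops the persisted mgr/cephadm/upgrade_state config-key so nothing tries to resume the upgrade, and only then logs "Upgrade: Complete!". Expressed as the equivalent CLI calls, a sketch (assuming a reachable cluster and admin keyring; the daemon-type list is taken from the audit entries above, not from cephadm's source) could look like:

    #!/usr/bin/env python3
    # Sketch of the cleanup the mgr performs at upgrade finalization,
    # written as the equivalent `ceph` CLI invocations.
    import subprocess

    WHO = ["mgr", "mon", "client.crash", "osd", "mds", "client.rgw",
           "client.rbd-mirror", "client.ceph-exporter", "client.iscsi",
           "client.nfs", "client.nvmeof"]

    for who in WHO:
        # Drop the per-daemon-type image pin used during the staggered upgrade.
        subprocess.run(["ceph", "config", "rm", who, "container_image"], check=True)

    # Forget the persisted upgrade position; the upgrade is no longer resumable.
    subprocess.run(["ceph", "config-key", "del", "mgr/cephadm/upgrade_state"], check=True)

The ceph versions | jq -e '.mgr | length == 1' check that teuthology issues a few entries below is the matching exit criterion: after finalization, every mgr should report one and the same version.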
2026-03-10T11:46:19.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:19 vm05 bash[22470]: cluster 2026-03-10T11:46:18.328968+0000 mgr.y (mgr.24970) 73 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:20.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:20 vm07 bash[17804]: audit 2026-03-10T11:46:18.932691+0000 mgr.y (mgr.24970) 74 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:46:20.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:20 vm07 bash[17804]: cluster 2026-03-10T11:46:20.329300+0000 mgr.y (mgr.24970) 75 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:20.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:20 vm05 bash[22470]: audit 2026-03-10T11:46:18.932691+0000 mgr.y (mgr.24970) 74 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:46:20.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:20 vm05 bash[22470]: cluster 2026-03-10T11:46:20.329300+0000 mgr.y (mgr.24970) 75 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:20.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:20 vm05 bash[17453]: audit 2026-03-10T11:46:18.932691+0000 mgr.y (mgr.24970) 74 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:46:20.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:20 vm05 bash[17453]: cluster 2026-03-10T11:46:20.329300+0000 mgr.y (mgr.24970) 75 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:22.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:21 vm05 bash[17453]: audit 2026-03-10T11:46:20.809025+0000 mon.a (mon.0) 1153 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:21 vm05 bash[22470]: audit 2026-03-10T11:46:20.809025+0000 mon.a (mon.0) 1153 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:22.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:21 vm07 bash[17804]: audit 2026-03-10T11:46:20.809025+0000 mon.a (mon.0) 1153 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:23.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:22 vm05 bash[22470]: cluster 2026-03-10T11:46:22.329824+0000 mgr.y (mgr.24970) 76 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:23.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:22 vm05 bash[17453]: cluster 2026-03-10T11:46:22.329824+0000 mgr.y (mgr.24970) 76 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:23.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:22 vm07 bash[17804]: cluster 2026-03-10T11:46:22.329824+0000 mgr.y (mgr.24970) 76 : cluster [DBG] pgmap v29: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:23 vm05 bash[22470]: audit 2026-03-10T11:46:22.961653+0000 mon.a (mon.0) 1154 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:23 vm05 bash[22470]: audit 2026-03-10T11:46:22.967146+0000 mon.a (mon.0) 1155 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:23 vm05 bash[22470]: audit 2026-03-10T11:46:22.969046+0000 mon.c (mon.1) 158 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:46:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:23 vm05 bash[22470]: audit 2026-03-10T11:46:22.969966+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:46:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:23 vm05 bash[22470]: audit 2026-03-10T11:46:22.974738+0000 mon.a (mon.0) 1156 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:23 vm05 bash[22470]: audit 2026-03-10T11:46:23.021358+0000 mon.c (mon.1) 160 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:23 vm05 bash[22470]: audit 2026-03-10T11:46:23.022865+0000 mon.c (mon.1) 161 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:23 vm05 bash[22470]: audit 2026-03-10T11:46:23.023784+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:23 vm05 bash[22470]: audit 2026-03-10T11:46:23.029465+0000 mon.a (mon.0) 1157 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:23 vm05 bash[17453]: audit 2026-03-10T11:46:22.961653+0000 mon.a (mon.0) 1154 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:23 vm05 bash[17453]: audit 2026-03-10T11:46:22.967146+0000 mon.a (mon.0) 1155 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:23 vm05 bash[17453]: audit 2026-03-10T11:46:22.969046+0000 mon.c (mon.1) 158 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:23 vm05 bash[17453]: audit 2026-03-10T11:46:22.969966+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:23 vm05 bash[17453]: audit 2026-03-10T11:46:22.974738+0000 mon.a (mon.0) 1156 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:23 vm05 bash[17453]: audit 2026-03-10T11:46:23.021358+0000 mon.c (mon.1) 160 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:23 vm05 bash[17453]: audit 2026-03-10T11:46:23.022865+0000 mon.c (mon.1) 161 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:23 vm05 bash[17453]: audit 2026-03-10T11:46:23.023784+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:46:24.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:23 vm05 bash[17453]: audit 2026-03-10T11:46:23.029465+0000 mon.a (mon.0) 1157 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:23 vm07 bash[17804]: audit 2026-03-10T11:46:22.961653+0000 mon.a (mon.0) 1154 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:23 vm07 bash[17804]: audit 2026-03-10T11:46:22.967146+0000 mon.a (mon.0) 1155 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:23 vm07 bash[17804]: audit 2026-03-10T11:46:22.969046+0000 mon.c (mon.1) 158 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:46:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:23 vm07 bash[17804]: audit 2026-03-10T11:46:22.969966+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:46:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:23 vm07 bash[17804]: audit 2026-03-10T11:46:22.974738+0000 mon.a (mon.0) 1156 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:23 vm07 bash[17804]: audit 2026-03-10T11:46:23.021358+0000 mon.c (mon.1) 160 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:23 vm07 bash[17804]: audit 2026-03-10T11:46:23.022865+0000 mon.c (mon.1) 161 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:46:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:23 vm07 bash[17804]: audit 2026-03-10T11:46:23.023784+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:46:24.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:23 vm07 bash[17804]: audit 2026-03-10T11:46:23.029465+0000 mon.a (mon.0) 1157 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:25.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:24 vm05 bash[22470]: cluster 2026-03-10T11:46:24.330147+0000 mgr.y (mgr.24970) 77 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:25.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:24 vm05 bash[17453]: cluster 2026-03-10T11:46:24.330147+0000 mgr.y (mgr.24970) 77 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:25.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:24 vm07 bash[17804]: cluster 2026-03-10T11:46:24.330147+0000 mgr.y (mgr.24970) 77 : cluster [DBG] pgmap v30: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:26.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:26 vm05 bash[22470]: cluster 2026-03-10T11:46:26.330606+0000 mgr.y (mgr.24970) 78 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:26.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:26 vm05 bash[17453]: cluster 2026-03-10T11:46:26.330606+0000 mgr.y (mgr.24970) 78 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:26.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:26 vm07 bash[17804]: cluster 2026-03-10T11:46:26.330606+0000 mgr.y (mgr.24970) 78 : cluster [DBG] pgmap v31: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:29.114 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:46:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:46:28] "GET /metrics HTTP/1.1" 200 37549 "" "Prometheus/2.51.0"
2026-03-10T11:46:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:29 vm07 bash[17804]: cluster 2026-03-10T11:46:28.331331+0000 mgr.y (mgr.24970) 79 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:29.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:29 vm05 bash[22470]: cluster 2026-03-10T11:46:28.331331+0000 mgr.y (mgr.24970) 79 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:29.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:29 vm05 bash[17453]: cluster 2026-03-10T11:46:28.331331+0000 mgr.y (mgr.24970) 79 : cluster [DBG] pgmap v32: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:30.481 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | length == 1'"'"''
2026-03-10T11:46:30.488 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:30 vm05 bash[22470]: audit 2026-03-10T11:46:28.939952+0000 mgr.y (mgr.24970) 80 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:46:30.488 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:30 vm05 bash[17453]: audit 2026-03-10T11:46:28.939952+0000 mgr.y (mgr.24970) 80 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:46:30.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:30 vm07 bash[17804]: audit 2026-03-10T11:46:28.939952+0000 mgr.y (mgr.24970) 80 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format":
"json"}]: dispatch 2026-03-10T11:46:31.050 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-10T11:46:31.094 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mgr | keys'"'"' | grep $sha1' 2026-03-10T11:46:31.318 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:31 vm05 bash[22470]: audit 2026-03-10T11:46:30.241147+0000 mgr.y (mgr.24970) 81 : audit [DBG] from='client.15240 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:31.318 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:31 vm05 bash[22470]: cluster 2026-03-10T11:46:30.331744+0000 mgr.y (mgr.24970) 82 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:31.318 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:31 vm05 bash[22470]: audit 2026-03-10T11:46:30.806204+0000 mon.c (mon.1) 163 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:31.318 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:31 vm05 bash[22470]: audit 2026-03-10T11:46:31.039700+0000 mon.b (mon.2) 278 : audit [DBG] from='client.? 192.168.123.105:0/2908553613' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:31.318 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:31 vm05 bash[17453]: audit 2026-03-10T11:46:30.241147+0000 mgr.y (mgr.24970) 81 : audit [DBG] from='client.15240 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:31.318 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:31 vm05 bash[17453]: cluster 2026-03-10T11:46:30.331744+0000 mgr.y (mgr.24970) 82 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:31.319 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:31 vm05 bash[17453]: audit 2026-03-10T11:46:30.806204+0000 mon.c (mon.1) 163 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:31.319 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:31 vm05 bash[17453]: audit 2026-03-10T11:46:31.039700+0000 mon.b (mon.2) 278 : audit [DBG] from='client.? 
192.168.123.105:0/2908553613' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:31.671 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)" 2026-03-10T11:46:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:31 vm07 bash[17804]: audit 2026-03-10T11:46:30.241147+0000 mgr.y (mgr.24970) 81 : audit [DBG] from='client.15240 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:31 vm07 bash[17804]: cluster 2026-03-10T11:46:30.331744+0000 mgr.y (mgr.24970) 82 : cluster [DBG] pgmap v33: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:31 vm07 bash[17804]: audit 2026-03-10T11:46:30.806204+0000 mon.c (mon.1) 163 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:31.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:31 vm07 bash[17804]: audit 2026-03-10T11:46:31.039700+0000 mon.b (mon.2) 278 : audit [DBG] from='client.? 192.168.123.105:0/2908553613' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:31.714 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 2'"'"'' 2026-03-10T11:46:32.219 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-10T11:46:32.265 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '"'"'.up_to_date | length == 2'"'"'' 2026-03-10T11:46:32.488 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:32 vm05 bash[17453]: audit 2026-03-10T11:46:31.661311+0000 mon.b (mon.2) 279 : audit [DBG] from='client.? 192.168.123.105:0/104307717' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:32.488 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:32 vm05 bash[17453]: audit 2026-03-10T11:46:32.210565+0000 mon.a (mon.0) 1158 : audit [DBG] from='client.? 192.168.123.105:0/1318709150' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:32.488 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:32 vm05 bash[22470]: audit 2026-03-10T11:46:31.661311+0000 mon.b (mon.2) 279 : audit [DBG] from='client.? 192.168.123.105:0/104307717' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:32.489 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:32 vm05 bash[22470]: audit 2026-03-10T11:46:32.210565+0000 mon.a (mon.0) 1158 : audit [DBG] from='client.? 192.168.123.105:0/1318709150' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:32.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:32 vm07 bash[17804]: audit 2026-03-10T11:46:31.661311+0000 mon.b (mon.2) 279 : audit [DBG] from='client.? 
192.168.123.105:0/104307717' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:32.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:32 vm07 bash[17804]: audit 2026-03-10T11:46:32.210565+0000 mon.a (mon.0) 1158 : audit [DBG] from='client.? 192.168.123.105:0/1318709150' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:33 vm05 bash[22470]: cluster 2026-03-10T11:46:32.332334+0000 mgr.y (mgr.24970) 83 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:33.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:33 vm05 bash[17453]: cluster 2026-03-10T11:46:32.332334+0000 mgr.y (mgr.24970) 83 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:33.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:33 vm07 bash[17804]: cluster 2026-03-10T11:46:32.332334+0000 mgr.y (mgr.24970) 83 : cluster [DBG] pgmap v34: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:34.262 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-10T11:46:34.312 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-10T11:46:34.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:34 vm05 bash[22470]: audit 2026-03-10T11:46:32.751213+0000 mgr.y (mgr.24970) 84 : audit [DBG] from='client.15261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:34.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:34 vm05 bash[17453]: audit 2026-03-10T11:46:32.751213+0000 mgr.y (mgr.24970) 84 : audit [DBG] from='client.15261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:34.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:34 vm07 bash[17804]: audit 2026-03-10T11:46:32.751213+0000 mgr.y (mgr.24970) 84 : audit [DBG] from='client.15261 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:34.795 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:46:34.795 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": null, 2026-03-10T11:46:34.795 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": false, 2026-03-10T11:46:34.795 INFO:teuthology.orchestra.run.vm05.stdout: "which": "", 2026-03-10T11:46:34.795 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-10T11:46:34.795 INFO:teuthology.orchestra.run.vm05.stdout: "progress": null, 2026-03-10T11:46:34.795 INFO:teuthology.orchestra.run.vm05.stdout: "message": "", 2026-03-10T11:46:34.795 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-10T11:46:34.795 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:46:34.854 DEBUG:teuthology.orchestra.run.vm05:> sudo 
/home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-10T11:46:35.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:35 vm05 bash[22470]: cluster 2026-03-10T11:46:34.332666+0000 mgr.y (mgr.24970) 85 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:35.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:35 vm05 bash[17453]: cluster 2026-03-10T11:46:34.332666+0000 mgr.y (mgr.24970) 85 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:35.382 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK 2026-03-10T11:46:35.435 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.x | awk '"'"'{print $2}'"'"')' 2026-03-10T11:46:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:35 vm07 bash[17804]: cluster 2026-03-10T11:46:34.332666+0000 mgr.y (mgr.24970) 85 : cluster [DBG] pgmap v35: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:36.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:36 vm05 bash[17453]: audit 2026-03-10T11:46:34.798278+0000 mgr.y (mgr.24970) 86 : audit [DBG] from='client.25177 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:36.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:36 vm05 bash[17453]: audit 2026-03-10T11:46:35.386302+0000 mon.c (mon.1) 164 : audit [DBG] from='client.? 192.168.123.105:0/2094164050' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T11:46:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:36 vm05 bash[22470]: audit 2026-03-10T11:46:34.798278+0000 mgr.y (mgr.24970) 86 : audit [DBG] from='client.25177 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:36 vm05 bash[22470]: audit 2026-03-10T11:46:35.386302+0000 mon.c (mon.1) 164 : audit [DBG] from='client.? 192.168.123.105:0/2094164050' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T11:46:36.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:36 vm07 bash[17804]: audit 2026-03-10T11:46:34.798278+0000 mgr.y (mgr.24970) 86 : audit [DBG] from='client.25177 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:36.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:36 vm07 bash[17804]: audit 2026-03-10T11:46:35.386302+0000 mon.c (mon.1) 164 : audit [DBG] from='client.? 
192.168.123.105:0/2094164050' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T11:46:37.509 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:46:37.559 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:37 vm05 bash[17453]: audit 2026-03-10T11:46:35.880294+0000 mgr.y (mgr.24970) 87 : audit [DBG] from='client.15273 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:37.559 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:37 vm05 bash[17453]: audit 2026-03-10T11:46:36.097523+0000 mgr.y (mgr.24970) 88 : audit [DBG] from='client.15279 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm07", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:37.559 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:37 vm05 bash[17453]: cluster 2026-03-10T11:46:36.333032+0000 mgr.y (mgr.24970) 89 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:37.559 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:37 vm05 bash[22470]: audit 2026-03-10T11:46:35.880294+0000 mgr.y (mgr.24970) 87 : audit [DBG] from='client.15273 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:37.559 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:37 vm05 bash[22470]: audit 2026-03-10T11:46:36.097523+0000 mgr.y (mgr.24970) 88 : audit [DBG] from='client.15279 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm07", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:37.559 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:37 vm05 bash[22470]: cluster 2026-03-10T11:46:36.333032+0000 mgr.y (mgr.24970) 89 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:37.605 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! 
ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done' 2026-03-10T11:46:37.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:37 vm07 bash[17804]: audit 2026-03-10T11:46:35.880294+0000 mgr.y (mgr.24970) 87 : audit [DBG] from='client.15273 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:37.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:37 vm07 bash[17804]: audit 2026-03-10T11:46:36.097523+0000 mgr.y (mgr.24970) 88 : audit [DBG] from='client.15279 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm07", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:37.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:37 vm07 bash[17804]: cluster 2026-03-10T11:46:36.333032+0000 mgr.y (mgr.24970) 89 : cluster [DBG] pgmap v36: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:38.182 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (13m) 52s ago 20m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (34s) 15s ago 19m 66.7M - 10.4.0 c8b91775d855 3d10fa6a70a7 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (60s) 52s ago 19m 41.3M - 3.5 e1d6a67b021e 5fb8678f46ba 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (57s) 15s ago 22m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (10m) 52s ago 23m 518M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (23m) 52s ago 23m 69.3M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (22m) 15s ago 22m 54.4M 2048M 17.2.0 e1d6a67b021e 824de3717020 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (22m) 52s ago 22m 51.8M 2048M 17.2.0 e1d6a67b021e bd8a00588046 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (13m) 52s ago 20m 7908k - 1.7.0 72c9c2088986 d4b69c85984a 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (13m) 15s ago 20m 7648k - 1.7.0 72c9c2088986 33ca1c822db8 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (22m) 52s ago 22m 53.0M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (22m) 52s ago 22m 55.2M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (21m) 52s ago 21m 51.6M 4096M 17.2.0 e1d6a67b021e 561729c88c06 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (21m) 52s ago 21m 54.6M 4096M 17.2.0 e1d6a67b021e 56034d2898b8 2026-03-10T11:46:38.624 
INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (21m) 15s ago 21m 54.6M 4096M 17.2.0 e1d6a67b021e 452f5de332b6 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (21m) 15s ago 21m 51.2M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (20m) 15s ago 20m 50.0M 4096M 17.2.0 e1d6a67b021e cb67459019f8 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (20m) 15s ago 20m 52.7M 4096M 17.2.0 e1d6a67b021e c542edbe96b5 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (59s) 15s ago 20m 40.1M - 2.51.0 1d3b7f56885b d5a9b80fa8a4 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (19m) 52s ago 19m 87.3M - 17.2.0 e1d6a67b021e f2644e7eb2f2 2026-03-10T11:46:38.624 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (19m) 15s ago 19m 88.2M - 17.2.0 e1d6a67b021e 4a4d4c0acae7 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:38 vm05 bash[22470]: cephadm 2026-03-10T11:46:37.501322+0000 mgr.y (mgr.24970) 90 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:38 vm05 bash[22470]: audit 2026-03-10T11:46:37.508315+0000 mon.a (mon.0) 1159 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:38 vm05 bash[22470]: audit 2026-03-10T11:46:37.510255+0000 mon.c (mon.1) 165 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:38 vm05 bash[22470]: audit 2026-03-10T11:46:37.928912+0000 mon.c (mon.1) 166 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:38 vm05 bash[22470]: audit 2026-03-10T11:46:37.932000+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:38 vm05 bash[22470]: audit 2026-03-10T11:46:37.942602+0000 mon.a (mon.0) 1160 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:38 vm05 bash[22470]: cephadm 2026-03-10T11:46:37.994463+0000 mgr.y (mgr.24970) 91 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:38 vm05 bash[22470]: audit 2026-03-10T11:46:38.170738+0000 mgr.y (mgr.24970) 92 : audit [DBG] from='client.15282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:38 vm05 bash[22470]: cluster 2026-03-10T11:46:38.333576+0000 mgr.y (mgr.24970) 93 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:38 vm05 bash[17453]: cephadm 
2026-03-10T11:46:37.501322+0000 mgr.y (mgr.24970) 90 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:38 vm05 bash[17453]: audit 2026-03-10T11:46:37.508315+0000 mon.a (mon.0) 1159 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:38 vm05 bash[17453]: audit 2026-03-10T11:46:37.510255+0000 mon.c (mon.1) 165 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:38 vm05 bash[17453]: audit 2026-03-10T11:46:37.928912+0000 mon.c (mon.1) 166 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:38 vm05 bash[17453]: audit 2026-03-10T11:46:37.932000+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:38 vm05 bash[17453]: audit 2026-03-10T11:46:37.942602+0000 mon.a (mon.0) 1160 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:38 vm05 bash[17453]: cephadm 2026-03-10T11:46:37.994463+0000 mgr.y (mgr.24970) 91 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:38 vm05 bash[17453]: audit 2026-03-10T11:46:38.170738+0000 mgr.y (mgr.24970) 92 : audit [DBG] from='client.15282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:38.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:38 vm05 bash[17453]: cluster 2026-03-10T11:46:38.333576+0000 mgr.y (mgr.24970) 93 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {}, 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": { 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 
2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 13, 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-10T11:46:38.867 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:46:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:38 vm07 bash[17804]: cephadm 2026-03-10T11:46:37.501322+0000 mgr.y (mgr.24970) 90 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:46:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:38 vm07 bash[17804]: audit 2026-03-10T11:46:37.508315+0000 mon.a (mon.0) 1159 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:38 vm07 bash[17804]: audit 2026-03-10T11:46:37.510255+0000 mon.c (mon.1) 165 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:38 vm07 bash[17804]: audit 2026-03-10T11:46:37.928912+0000 mon.c (mon.1) 166 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:38 vm07 bash[17804]: audit 2026-03-10T11:46:37.932000+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:38 vm07 bash[17804]: audit 2026-03-10T11:46:37.942602+0000 mon.a (mon.0) 1160 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:38 vm07 bash[17804]: cephadm 2026-03-10T11:46:37.994463+0000 mgr.y (mgr.24970) 91 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:46:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:38 vm07 bash[17804]: audit 2026-03-10T11:46:38.170738+0000 mgr.y (mgr.24970) 92 : audit [DBG] from='client.15282 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:38 vm07 bash[17804]: cluster 2026-03-10T11:46:38.333576+0000 mgr.y (mgr.24970) 93 : cluster [DBG] pgmap v37: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:39.077 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:46:39.077 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-10T11:46:39.077 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-10T11:46:39.077 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mon on host(s) vm07", 2026-03-10T11:46:39.077 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-10T11:46:39.077 
INFO:teuthology.orchestra.run.vm05.stdout: "progress": "", 2026-03-10T11:46:39.077 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-10T11:46:39.078 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-10T11:46:39.078 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:46:39.257 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:46:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:46:38] "GET /metrics HTTP/1.1" 200 37546 "" "Prometheus/2.51.0" 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:38.403558+0000 mgr.y (mgr.24970) 94 : audit [DBG] from='client.25198 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:38.614542+0000 mgr.y (mgr.24970) 95 : audit [DBG] from='client.15291 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:38.870407+0000 mon.a (mon.0) 1161 : audit [DBG] from='client.? 192.168.123.105:0/284470519' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:38.948053+0000 mgr.y (mgr.24970) 96 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:39.080745+0000 mgr.y (mgr.24970) 97 : audit [DBG] from='client.25207 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:39.440723+0000 mon.a (mon.0) 1162 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:39.443433+0000 mon.c (mon.1) 168 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:39.444961+0000 mon.c (mon.1) 169 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:39.451095+0000 mon.a (mon.0) 1163 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:39.452463+0000 mon.c (mon.1) 170 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T11:46:39.779 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:39 vm07 bash[17804]: audit 2026-03-10T11:46:39.453003+0000 mon.c (mon.1) 171 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-10T11:46:39.841 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:38.403558+0000 mgr.y (mgr.24970) 94 : audit [DBG] from='client.25198 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:39.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:38.614542+0000 mgr.y (mgr.24970) 95 : audit [DBG] from='client.15291 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:39.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:38.870407+0000 mon.a (mon.0) 1161 : audit [DBG] from='client.? 192.168.123.105:0/284470519' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:39.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:38.948053+0000 mgr.y (mgr.24970) 96 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:39.080745+0000 mgr.y (mgr.24970) 97 : audit [DBG] from='client.25207 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:39.440723+0000 mon.a (mon.0) 1162 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:39.443433+0000 mon.c (mon.1) 168 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:39.444961+0000 mon.c (mon.1) 169 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:39.451095+0000 mon.a (mon.0) 1163 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:39.452463+0000 mon.c (mon.1) 170 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:39 vm05 bash[17453]: audit 2026-03-10T11:46:39.453003+0000 mon.c (mon.1) 171 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:38.403558+0000 mgr.y (mgr.24970) 94 : audit [DBG] from='client.25198 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:38.614542+0000 mgr.y (mgr.24970) 95 : audit [DBG] from='client.15291 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:38.870407+0000 mon.a (mon.0) 1161 : audit [DBG] from='client.? 192.168.123.105:0/284470519' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:38.948053+0000 mgr.y (mgr.24970) 96 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:39.080745+0000 mgr.y (mgr.24970) 97 : audit [DBG] from='client.25207 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:39.440723+0000 mon.a (mon.0) 1162 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:39.443433+0000 mon.c (mon.1) 168 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:39.444961+0000 mon.c (mon.1) 169 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:39.451095+0000 mon.a (mon.0) 1163 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:39.452463+0000 mon.c (mon.1) 170 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T11:46:39.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:39 vm05 bash[22470]: audit 2026-03-10T11:46:39.453003+0000 mon.c (mon.1) 171 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["b"]}]: dispatch 2026-03-10T11:46:40.533 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:40.533 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:40.533 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:40.533 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:40.533 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:40.533 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:40.533 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:40.534 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:40.534 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:40.815 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: Stopping Ceph mon.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d... 
2026-03-10T11:46:40.815 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:40 vm07 bash[17804]: debug 2026-03-10T11:46:40.567+0000 7f693b3a4700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T11:46:40.815 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:40 vm07 bash[17804]: debug 2026-03-10T11:46:40.567+0000 7f693b3a4700 -1 mon.b@2(peon) e3 *** Got Signal Terminated *** 2026-03-10T11:46:40.815 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:40 vm07 bash[46046]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mon-b 2026-03-10T11:46:40.815 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.b.service: Deactivated successfully. 2026-03-10T11:46:40.815 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: Stopped Ceph mon.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:46:41.195 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:41.195 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:41.195 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:41.195 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:41.195 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:46:41.195 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:41.195 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:41.195 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:41.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:40 vm07 systemd[1]: Started Ceph mon.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.019+0000 7f825e153d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.019+0000 7f825e153d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.019+0000 7f825e153d80 0 pidfile_write: ignore empty --pid-file
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.019+0000 7f825e153d80 0 load: jerasure load: lrc
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Git sha 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: DB SUMMARY
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: DB Session ID: IAZ0E0VJO9EAFWRTOQ7L
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: CURRENT file: CURRENT
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 2048 Bytes
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 1, files: 000042.sst
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000040.log size: 1250857 ;
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.error_if_exists: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.create_if_missing: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.env: 0x558dac026dc0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.info_log: 0x558dd13517e0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.statistics: (nil)
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.use_fsync: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.db_log_dir:
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.wal_dir:
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.write_buffer_manager: 0x558dd1355900
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T11:46:41.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.unordered_write: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.row_cache: None
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.wal_filter: None
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.two_write_queues: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.wal_compression: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.atomic_flush: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_open_files: -1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Compression algorithms supported:
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: kZSTD supported: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: kXpressCompression supported: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: kZlibCompression supported: 1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000009
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T11:46:41.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.merge_operator:
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_filter: None
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558dd1350320)
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: cache_index_and_filter_blocks: 1
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: pin_top_level_index_and_filter: 1
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: index_type: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: data_block_index_type: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: index_shortening: 1
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: data_block_hash_table_util_ratio: 0.750000
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: checksum: 4
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: no_block_cache: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: block_cache: 0x558dd1377350
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: block_cache_name: BinnedLRUCache
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: block_cache_options:
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: capacity : 536870912
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: num_shard_bits : 4
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: strict_capacity_limit : 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: high_pri_pool_ratio: 0.000
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: block_cache_compressed: (nil)
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: persistent_cache: (nil)
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: block_size: 4096
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: block_size_deviation: 10
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: block_restart_interval: 16
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: index_block_restart_interval: 1
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: metadata_block_size: 4096
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: partition_filters: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: use_delta_encoding: 1
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: filter_policy: bloomfilter
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: whole_key_filtering: 1
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: verify_compression: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: read_amp_bytes_per_bit: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: format_version: 5
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: enable_index_compression: 1
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: block_align: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: max_auto_readahead_size: 262144
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: prepopulate_block_cache: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: initial_auto_readahead_size: 8192
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: num_file_reads_for_auto_readahead: 2
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression: NoCompression
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.num_levels: 7
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T11:46:41.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T11:46:41.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.ttl: 2592000
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 42.sst
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 44, last_sequence is 23733, log_number is 40,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 40 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 871c9fb6-e4c8-43e6-882d-b55d69d40fe7 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773143201027704, "job": 1, "event": "recovery_started", "wal_files": [40]} 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.023+0000 7f825e153d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #40 mode 2 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.027+0000 7f825e153d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773143201033058, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 45, "file_size": 763000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23738, "largest_seqno": 24568, "table_properties": {"data_size": 759324, "index_size": 1745, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 8938, "raw_average_key_size": 25, "raw_value_size": 752018, "raw_average_value_size": 2142, "num_data_blocks": 79, "num_entries": 351, "num_filter_entries": 351, "num_deletions": 7, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773143201, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "871c9fb6-e4c8-43e6-882d-b55d69d40fe7", "db_session_id": "IAZ0E0VJO9EAFWRTOQ7L", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.027+0000 7f825e153d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773143201033132, "job": 1, "event": "recovery_finished"} 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.027+0000 7f825e153d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 47 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.031+0000 7f825e153d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.035+0000 7f825e153d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.035+0000 7f825e153d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558dd1378e00 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.035+0000 7f825e153d80 4 rocksdb: DB pointer 0x558dd1484000 2026-03-10T11:46:41.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:41 vm07 bash[46158]: debug 2026-03-10T11:46:41.035+0000 7f825e153d80 0 starting mon.b rank 2 at public addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] at bind addrs [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.119713+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.119713+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.123372+0000 mon.a (mon.0) 1165 : cluster [INF] mon.a calling monitor election 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.123372+0000 mon.a (mon.0) 1165 : cluster [INF] mon.a calling monitor election 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.127107+0000 mon.a (mon.0) 1166 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.127107+0000 mon.a (mon.0) 1166 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.133698+0000 mon.a (mon.0) 1167 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.133698+0000 mon.a (mon.0) 1167 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.133806+0000 mon.a (mon.0) 1168 : cluster [DBG] fsmap 2026-03-10T11:46:42.445 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.133806+0000 mon.a (mon.0) 1168 : cluster [DBG] fsmap 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.133888+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.133888+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.134583+0000 mon.a (mon.0) 1170 : cluster [DBG] mgrmap e41: y(active, since 71s), standbys: x 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.134583+0000 mon.a (mon.0) 1170 : cluster [DBG] mgrmap e41: y(active, since 71s), standbys: x 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.141823+0000 mon.a (mon.0) 1171 : cluster [INF] overall HEALTH_OK 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: cluster 2026-03-10T11:46:41.141823+0000 mon.a (mon.0) 1171 : cluster [INF] overall HEALTH_OK 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: audit 2026-03-10T11:46:41.146422+0000 mon.a (mon.0) 1172 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: audit 2026-03-10T11:46:41.146422+0000 mon.a (mon.0) 1172 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: audit 2026-03-10T11:46:41.153865+0000 mon.a (mon.0) 1173 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: audit 2026-03-10T11:46:41.153865+0000 mon.a (mon.0) 1173 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: audit 2026-03-10T11:46:41.157078+0000 mon.c (mon.1) 175 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:42.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:42 vm07 bash[46158]: audit 2026-03-10T11:46:41.157078+0000 mon.c (mon.1) 175 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: cluster 2026-03-10T11:46:41.119713+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: cluster 2026-03-10T11:46:41.123372+0000 mon.a (mon.0) 1165 : cluster [INF] mon.a calling monitor election 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: cluster 2026-03-10T11:46:41.127107+0000 mon.a (mon.0) 1166 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: cluster 2026-03-10T11:46:41.133698+0000 mon.a (mon.0) 1167 : cluster 
[DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: cluster 2026-03-10T11:46:41.133806+0000 mon.a (mon.0) 1168 : cluster [DBG] fsmap 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: cluster 2026-03-10T11:46:41.133888+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: cluster 2026-03-10T11:46:41.134583+0000 mon.a (mon.0) 1170 : cluster [DBG] mgrmap e41: y(active, since 71s), standbys: x 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: cluster 2026-03-10T11:46:41.141823+0000 mon.a (mon.0) 1171 : cluster [INF] overall HEALTH_OK 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: audit 2026-03-10T11:46:41.146422+0000 mon.a (mon.0) 1172 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: audit 2026-03-10T11:46:41.153865+0000 mon.a (mon.0) 1173 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:42 vm05 bash[17453]: audit 2026-03-10T11:46:41.157078+0000 mon.c (mon.1) 175 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: cluster 2026-03-10T11:46:41.119713+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: cluster 2026-03-10T11:46:41.123372+0000 mon.a (mon.0) 1165 : cluster [INF] mon.a calling monitor election 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: cluster 2026-03-10T11:46:41.127107+0000 mon.a (mon.0) 1166 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: cluster 2026-03-10T11:46:41.133698+0000 mon.a (mon.0) 1167 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:46:42.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: cluster 2026-03-10T11:46:41.133806+0000 mon.a (mon.0) 1168 : cluster [DBG] fsmap 2026-03-10T11:46:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: cluster 2026-03-10T11:46:41.133888+0000 mon.a (mon.0) 1169 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:46:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: cluster 2026-03-10T11:46:41.134583+0000 mon.a (mon.0) 1170 : cluster [DBG] mgrmap e41: y(active, since 71s), standbys: x 2026-03-10T11:46:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: cluster 2026-03-10T11:46:41.141823+0000 mon.a (mon.0) 1171 : cluster [INF] overall HEALTH_OK 2026-03-10T11:46:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 
bash[22470]: audit 2026-03-10T11:46:41.146422+0000 mon.a (mon.0) 1172 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: audit 2026-03-10T11:46:41.153865+0000 mon.a (mon.0) 1173 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:42.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:42 vm05 bash[22470]: audit 2026-03-10T11:46:41.157078+0000 mon.c (mon.1) 175 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:43 vm07 bash[46158]: cluster 2026-03-10T11:46:42.334444+0000 mgr.y (mgr.24970) 105 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:43.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:43 vm07 bash[46158]: cluster 2026-03-10T11:46:42.334444+0000 mgr.y (mgr.24970) 105 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:43.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:43 vm05 bash[17453]: cluster 2026-03-10T11:46:42.334444+0000 mgr.y (mgr.24970) 105 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:43.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:43 vm05 bash[22470]: cluster 2026-03-10T11:46:42.334444+0000 mgr.y (mgr.24970) 105 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:44.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:44 vm07 bash[46158]: cluster 2026-03-10T11:46:44.334813+0000 mgr.y (mgr.24970) 106 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:44.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:44 vm07 bash[46158]: cluster 2026-03-10T11:46:44.334813+0000 mgr.y (mgr.24970) 106 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:44.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:44 vm05 bash[22470]: cluster 2026-03-10T11:46:44.334813+0000 mgr.y (mgr.24970) 106 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:44.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:44 vm05 bash[17453]: cluster 2026-03-10T11:46:44.334813+0000 mgr.y (mgr.24970) 106 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:46.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:45 vm07 bash[46158]: audit 2026-03-10T11:46:45.806874+0000 mon.c (mon.1) 176 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:46.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:45 vm07 bash[46158]: audit 2026-03-10T11:46:45.806874+0000 mon.c (mon.1) 176 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:46:46.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:45 vm05 
2026-03-10T11:46:46.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:45 vm05 bash[22470]: audit 2026-03-10T11:46:45.806874+0000 mon.c (mon.1) 176 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:46:46.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:45 vm05 bash[17453]: audit 2026-03-10T11:46:45.806874+0000 mon.c (mon.1) 176 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:46:46.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:46:46 vm05 bash[53899]: debug 2026-03-10T11:46:46.044+0000 7f255e962640 -1 mgr.server handle_report got status from non-daemon mon.b
2026-03-10T11:46:47.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:47 vm05 bash[17453]: cluster 2026-03-10T11:46:46.335174+0000 mgr.y (mgr.24970) 107 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:47.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:47 vm05 bash[17453]: audit 2026-03-10T11:46:46.596036+0000 mon.a (mon.0) 1174 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:47.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:47 vm05 bash[17453]: audit 2026-03-10T11:46:46.601797+0000 mon.a (mon.0) 1175 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:47.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:47 vm05 bash[22470]: cluster 2026-03-10T11:46:46.335174+0000 mgr.y (mgr.24970) 107 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:47.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:47 vm05 bash[22470]: audit 2026-03-10T11:46:46.596036+0000 mon.a (mon.0) 1174 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:47.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:47 vm05 bash[22470]: audit 2026-03-10T11:46:46.601797+0000 mon.a (mon.0) 1175 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:47 vm07 bash[46158]: cluster 2026-03-10T11:46:46.335174+0000 mgr.y (mgr.24970) 107 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:47 vm07 bash[46158]: audit 2026-03-10T11:46:46.596036+0000 mon.a (mon.0) 1174 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:47.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:47 vm07 bash[46158]: audit 2026-03-10T11:46:46.601797+0000 mon.a (mon.0) 1175 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:48.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:48 vm05 bash[22470]: audit 2026-03-10T11:46:47.235068+0000 mon.a (mon.0) 1176 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:48.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:48 vm05 bash[22470]: audit 2026-03-10T11:46:47.245928+0000 mon.a (mon.0) 1177 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:48.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:48 vm05 bash[17453]: audit 2026-03-10T11:46:47.235068+0000 mon.a (mon.0) 1176 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:48.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:48 vm05 bash[17453]: audit 2026-03-10T11:46:47.245928+0000 mon.a (mon.0) 1177 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:48.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:48 vm07 bash[46158]: audit 2026-03-10T11:46:47.235068+0000 mon.a (mon.0) 1176 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:48.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:48 vm07 bash[46158]: audit 2026-03-10T11:46:47.245928+0000 mon.a (mon.0) 1177 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:49.239 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:46:48 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:46:48] "GET /metrics HTTP/1.1" 200 37490 "" "Prometheus/2.51.0"
2026-03-10T11:46:49.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:49 vm05 bash[22470]: cluster 2026-03-10T11:46:48.335670+0000 mgr.y (mgr.24970) 108 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:49.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:49 vm05 bash[17453]: cluster 2026-03-10T11:46:48.335670+0000 mgr.y (mgr.24970) 108 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:49.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:49 vm07 bash[46158]: cluster 2026-03-10T11:46:48.335670+0000 mgr.y (mgr.24970) 108 : cluster [DBG] pgmap v42: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:50 vm05 bash[22470]: audit 2026-03-10T11:46:48.958760+0000 mgr.y (mgr.24970) 109 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:46:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:50 vm05 bash[17453]: audit 2026-03-10T11:46:48.958760+0000 mgr.y (mgr.24970) 109 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
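The mgr.y journal line above shows the cluster's Prometheus instance (prometheus.a on vm07) scraping the active mgr's exporter, which answered 200 with a 37490-byte payload. The same endpoint can be probed directly; a sketch assuming the mgr prometheus module's default port 9283 (the port itself is not shown in this log):

    # fetch the payload Prometheus is polling; vm05 hosts the active mgr.y
    curl -s http://vm05.local:9283/metrics | head
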
2026-03-10T11:46:50.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:50 vm07 bash[46158]: audit 2026-03-10T11:46:48.958760+0000 mgr.y (mgr.24970) 109 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:46:51.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:51 vm05 bash[22470]: cluster 2026-03-10T11:46:50.335997+0000 mgr.y (mgr.24970) 110 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:51.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:51 vm05 bash[17453]: cluster 2026-03-10T11:46:50.335997+0000 mgr.y (mgr.24970) 110 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:51.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:51 vm07 bash[46158]: cluster 2026-03-10T11:46:50.335997+0000 mgr.y (mgr.24970) 110 : cluster [DBG] pgmap v43: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:46:52.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:52 vm07 bash[46158]: cluster 2026-03-10T11:46:52.336688+0000 mgr.y (mgr.24970) 111 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:52.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:52 vm05 bash[22470]: cluster 2026-03-10T11:46:52.336688+0000 mgr.y (mgr.24970) 111 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:52.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:52 vm05 bash[17453]: cluster 2026-03-10T11:46:52.336688+0000 mgr.y (mgr.24970) 111 : cluster [DBG] pgmap v44: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:52.893562+0000 mgr.y (mgr.24970) 112 : cephadm [INF] Detected new or changed devices on vm07
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.900161+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24970 ' entity='mgr.y'
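cephadm's periodic host scan has just re-inventoried vm07 (whose daemons were redeployed on the new image) and reported changed devices. The same inventory can be refreshed and inspected on demand; a brief sketch:

    # force a fresh ceph-volume inventory pass instead of waiting for the next scan
    ceph orch device ls --refresh
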
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.905055+0000 mon.a (mon.0) 1179 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.905798+0000 mon.c (mon.1) 177 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.906271+0000 mon.c (mon.1) 178 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.910472+0000 mon.a (mon.0) 1180 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.950522+0000 mon.c (mon.1) 179 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.951712+0000 mon.c (mon.1) 180 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.952676+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:52.953158+0000 mgr.y (mgr.24970) 113 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.957586+0000 mon.a (mon.0) 1181 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.960714+0000 mon.c (mon.1) 182 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:52.961170+0000 mgr.y (mgr.24970) 114 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.965369+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.969893+0000 mon.c (mon.1) 183 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:52.970580+0000 mgr.y (mgr.24970) 115 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.976551+0000 mon.a (mon.0) 1183 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.978563+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:52.979268+0000 mgr.y (mgr.24970) 116 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.982468+0000 mon.a (mon.0) 1184 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.985173+0000 mon.c (mon.1) 185 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.985372+0000 mon.a (mon.0) 1185 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.988214+0000 mon.a (mon.0) 1186 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.991084+0000 mon.c (mon.1) 186 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:52.991720+0000 mgr.y (mgr.24970) 117 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.994905+0000 mon.a (mon.0) 1187 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:52.999738+0000 mon.c (mon.1) 187 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.000541+0000 mgr.y (mgr.24970) 118 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.003999+0000 mon.a (mon.0) 1188 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.008607+0000 mon.c (mon.1) 188 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.009350+0000 mgr.y (mgr.24970) 119 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.012753+0000 mon.a (mon.0) 1189 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.016948+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.017628+0000 mgr.y (mgr.24970) 120 : cephadm [INF] Upgrade: Setting container_image for all node-exporter
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.018694+0000 mon.c (mon.1) 190 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.018893+0000 mon.a (mon.0) 1190 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.019988+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.020639+0000 mgr.y (mgr.24970) 121 : cephadm [INF] Upgrade: Setting container_image for all prometheus
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.021693+0000 mon.c (mon.1) 192 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.021911+0000 mon.a (mon.0) 1191 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.023001+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.023631+0000 mgr.y (mgr.24970) 122 : cephadm [INF] Upgrade: Setting container_image for all alertmanager
2026-03-10T11:46:54.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.024675+0000 mon.c (mon.1) 194 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.024871+0000 mon.a (mon.0) 1192 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.026100+0000 mon.c (mon.1) 195 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.026775+0000 mgr.y (mgr.24970) 123 : cephadm [INF] Upgrade: Setting container_image for all grafana
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.027883+0000 mon.c (mon.1) 196 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.028082+0000 mon.a (mon.0) 1193 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.029119+0000 mon.c (mon.1) 197 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.029760+0000 mgr.y (mgr.24970) 124 : cephadm [INF] Upgrade: Setting container_image for all loki
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.030812+0000 mon.c (mon.1) 198 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.031003+0000 mon.a (mon.0) 1194 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.032065+0000 mon.c (mon.1) 199 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.032706+0000 mgr.y (mgr.24970) 125 : cephadm [INF] Upgrade: Setting container_image for all promtail
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.033831+0000 mon.c (mon.1) 200 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.034055+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.034750+0000 mgr.y (mgr.24970) 126 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.035778+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.035996+0000 mon.a (mon.0) 1196 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.039079+0000 mon.a (mon.0) 1197 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.041980+0000 mon.c (mon.1) 202 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.042189+0000 mon.a (mon.0) 1198 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.045297+0000 mon.a (mon.0) 1199 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.047787+0000 mon.c (mon.1) 203 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.047982+0000 mon.a (mon.0) 1200 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.050891+0000 mon.a (mon.0) 1201 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.053573+0000 mon.c (mon.1) 204 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.053784+0000 mon.a (mon.0) 1202 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.054693+0000 mon.c (mon.1) 205 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.054896+0000 mon.a (mon.0) 1203 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.058026+0000 mon.a (mon.0) 1204 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.060611+0000 mon.c (mon.1) 206 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.060827+0000 mon.a (mon.0) 1205 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:46:54.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.061636+0000 mon.c (mon.1) 207 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.061828+0000 mon.a (mon.0) 1206 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.064684+0000 mon.a (mon.0) 1207 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.067802+0000 mon.c (mon.1) 208 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.068070+0000 mon.a (mon.0) 1208 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.069093+0000 mon.c (mon.1) 209 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.069314+0000 mon.a (mon.0) 1209 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.072855+0000 mon.a (mon.0) 1210 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.076567+0000 mon.c (mon.1) 210 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.076813+0000 mon.a (mon.0) 1211 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.077393+0000 mon.c (mon.1) 211 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.077644+0000 mon.a (mon.0) 1212 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.083334+0000 mon.a (mon.0) 1213 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.084053+0000 mon.c (mon.1) 212 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.084289+0000 mon.a (mon.0) 1214 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.089838+0000 mon.a (mon.0) 1215 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.090715+0000 mon.c (mon.1) 213 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.090961+0000 mon.a (mon.0) 1216 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.091652+0000 mon.c (mon.1) 214 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.091888+0000 mon.a (mon.0) 1217 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.092479+0000 mon.c (mon.1) 215 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.092710+0000 mon.a (mon.0) 1218 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.093248+0000 mon.c (mon.1) 216 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.093502+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.094038+0000 mon.c (mon.1) 217 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.094292+0000 mon.a (mon.0) 1220 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.094898+0000 mon.c (mon.1) 218 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.095132+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: cephadm 2026-03-10T11:46:53.095568+0000 mgr.y (mgr.24970) 127 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.095810+0000 mon.c (mon.1) 219 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.096035+0000 mon.a (mon.0) 1222 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.100026+0000 mon.a (mon.0) 1223 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.100477+0000 mon.c (mon.1) 220 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10
11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.100477+0000 mon.c (mon.1) 220 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.101492+0000 mon.c (mon.1) 221 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.101492+0000 mon.c (mon.1) 221 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.101920+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.101920+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.106771+0000 mon.a (mon.0) 1224 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.106771+0000 mon.a (mon.0) 1224 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.146864+0000 mon.c (mon.1) 223 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.146864+0000 mon.c (mon.1) 223 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.148060+0000 mon.c (mon.1) 224 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.148060+0000 mon.c (mon.1) 224 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.148966+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.148966+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.201 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.166547+0000 mon.a (mon.0) 1225 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:53 vm07 bash[46158]: audit 2026-03-10T11:46:53.166547+0000 mon.a (mon.0) 1225 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:52.893562+0000 mgr.y (mgr.24970) 112 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.900161+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.905055+0000 mon.a (mon.0) 1179 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.905798+0000 mon.c (mon.1) 177 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.906271+0000 mon.c (mon.1) 178 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.910472+0000 mon.a (mon.0) 1180 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.950522+0000 mon.c (mon.1) 179 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.951712+0000 mon.c (mon.1) 180 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.952676+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:52.953158+0000 mgr.y (mgr.24970) 113 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.957586+0000 mon.a (mon.0) 1181 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.960714+0000 mon.c (mon.1) 182 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:52.961170+0000 mgr.y (mgr.24970) 114 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-10T11:46:54.342 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.965369+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.969893+0000 mon.c (mon.1) 183 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:52.970580+0000 mgr.y (mgr.24970) 115 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.976551+0000 mon.a (mon.0) 1183 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.978563+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:52.979268+0000 mgr.y (mgr.24970) 116 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.982468+0000 mon.a (mon.0) 1184 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.985173+0000 mon.c (mon.1) 185 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.985372+0000 mon.a (mon.0) 1185 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.988214+0000 mon.a (mon.0) 1186 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.991084+0000 mon.c (mon.1) 186 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:52.991720+0000 mgr.y (mgr.24970) 117 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.994905+0000 mon.a (mon.0) 1187 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:52.999738+0000 mon.c (mon.1) 187 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.000541+0000 mgr.y 
(mgr.24970) 118 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.003999+0000 mon.a (mon.0) 1188 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.008607+0000 mon.c (mon.1) 188 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.009350+0000 mgr.y (mgr.24970) 119 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.012753+0000 mon.a (mon.0) 1189 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.016948+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.017628+0000 mgr.y (mgr.24970) 120 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.018694+0000 mon.c (mon.1) 190 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.018893+0000 mon.a (mon.0) 1190 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.019988+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.020639+0000 mgr.y (mgr.24970) 121 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.021693+0000 mon.c (mon.1) 192 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.021911+0000 mon.a (mon.0) 1191 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.023001+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.023631+0000 mgr.y 
(mgr.24970) 122 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.024675+0000 mon.c (mon.1) 194 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.024871+0000 mon.a (mon.0) 1192 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.026100+0000 mon.c (mon.1) 195 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.026775+0000 mgr.y (mgr.24970) 123 : cephadm [INF] Upgrade: Setting container_image for all grafana 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.027883+0000 mon.c (mon.1) 196 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.028082+0000 mon.a (mon.0) 1193 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.029119+0000 mon.c (mon.1) 197 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.029760+0000 mgr.y (mgr.24970) 124 : cephadm [INF] Upgrade: Setting container_image for all loki 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.030812+0000 mon.c (mon.1) 198 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.031003+0000 mon.a (mon.0) 1194 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.032065+0000 mon.c (mon.1) 199 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.032706+0000 mgr.y (mgr.24970) 125 : cephadm [INF] Upgrade: Setting container_image for all promtail 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.033831+0000 mon.c (mon.1) 200 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' 
entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.034055+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.034750+0000 mgr.y (mgr.24970) 126 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T11:46:54.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.035778+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.035996+0000 mon.a (mon.0) 1196 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.039079+0000 mon.a (mon.0) 1197 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.041980+0000 mon.c (mon.1) 202 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.042189+0000 mon.a (mon.0) 1198 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.045297+0000 mon.a (mon.0) 1199 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.047787+0000 mon.c (mon.1) 203 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.047982+0000 mon.a (mon.0) 1200 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.050891+0000 mon.a (mon.0) 1201 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.053573+0000 mon.c (mon.1) 204 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: 
dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.053784+0000 mon.a (mon.0) 1202 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.054693+0000 mon.c (mon.1) 205 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.054896+0000 mon.a (mon.0) 1203 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.058026+0000 mon.a (mon.0) 1204 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.060611+0000 mon.c (mon.1) 206 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.060827+0000 mon.a (mon.0) 1205 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.061636+0000 mon.c (mon.1) 207 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.061828+0000 mon.a (mon.0) 1206 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.064684+0000 mon.a (mon.0) 1207 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.067802+0000 mon.c (mon.1) 208 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.068070+0000 mon.a (mon.0) 1208 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.069093+0000 mon.c (mon.1) 209 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.069314+0000 mon.a (mon.0) 1209 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.072855+0000 mon.a (mon.0) 1210 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.076567+0000 mon.c (mon.1) 210 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.076813+0000 mon.a (mon.0) 1211 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.077393+0000 mon.c (mon.1) 211 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.077644+0000 mon.a (mon.0) 1212 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.083334+0000 mon.a (mon.0) 1213 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.084053+0000 mon.c (mon.1) 212 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.084289+0000 mon.a (mon.0) 1214 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.089838+0000 mon.a (mon.0) 1215 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.090715+0000 mon.c (mon.1) 213 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.090961+0000 mon.a (mon.0) 1216 : audit [INF] 
from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.091652+0000 mon.c (mon.1) 214 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.091888+0000 mon.a (mon.0) 1217 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.092479+0000 mon.c (mon.1) 215 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.092710+0000 mon.a (mon.0) 1218 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.093248+0000 mon.c (mon.1) 216 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.093502+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.094038+0000 mon.c (mon.1) 217 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.094292+0000 mon.a (mon.0) 1220 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.094898+0000 mon.c (mon.1) 218 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.095132+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: cephadm 2026-03-10T11:46:53.095568+0000 mgr.y (mgr.24970) 127 : cephadm [INF] Upgrade: Complete! 
2026-03-10T11:46:54.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.095810+0000 mon.c (mon.1) 219 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.096035+0000 mon.a (mon.0) 1222 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.100026+0000 mon.a (mon.0) 1223 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.100477+0000 mon.c (mon.1) 220 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.101492+0000 mon.c (mon.1) 221 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.101920+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.106771+0000 mon.a (mon.0) 1224 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.146864+0000 mon.c (mon.1) 223 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.148060+0000 mon.c (mon.1) 224 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.148966+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:53 vm05 bash[22470]: audit 2026-03-10T11:46:53.166547+0000 mon.a (mon.0) 1225 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:52.893562+0000 mgr.y (mgr.24970) 112 : cephadm [INF] Detected new or changed devices on vm07 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.900161+0000 mon.a (mon.0) 1178 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.905055+0000 mon.a 
(mon.0) 1179 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.905798+0000 mon.c (mon.1) 177 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.906271+0000 mon.c (mon.1) 178 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.910472+0000 mon.a (mon.0) 1180 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.950522+0000 mon.c (mon.1) 179 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.951712+0000 mon.c (mon.1) 180 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.952676+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:52.953158+0000 mgr.y (mgr.24970) 113 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.957586+0000 mon.a (mon.0) 1181 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.960714+0000 mon.c (mon.1) 182 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:52.961170+0000 mgr.y (mgr.24970) 114 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.965369+0000 mon.a (mon.0) 1182 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.969893+0000 mon.c (mon.1) 183 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:52.970580+0000 mgr.y (mgr.24970) 115 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.976551+0000 mon.a (mon.0) 1183 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 
bash[17453]: audit 2026-03-10T11:46:52.978563+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:52.979268+0000 mgr.y (mgr.24970) 116 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.982468+0000 mon.a (mon.0) 1184 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.985173+0000 mon.c (mon.1) 185 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.985372+0000 mon.a (mon.0) 1185 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.988214+0000 mon.a (mon.0) 1186 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]': finished 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.991084+0000 mon.c (mon.1) 186 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:52.991720+0000 mgr.y (mgr.24970) 117 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:46:54.345 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.994905+0000 mon.a (mon.0) 1187 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:52.999738+0000 mon.c (mon.1) 187 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.000541+0000 mgr.y (mgr.24970) 118 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.003999+0000 mon.a (mon.0) 1188 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.008607+0000 mon.c (mon.1) 188 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.009350+0000 mgr.y (mgr.24970) 119 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.012753+0000 mon.a (mon.0) 1189 : audit [INF] from='mgr.24970 ' 
entity='mgr.y' 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.016948+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.017628+0000 mgr.y (mgr.24970) 120 : cephadm [INF] Upgrade: Setting container_image for all node-exporter 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.018694+0000 mon.c (mon.1) 190 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.018893+0000 mon.a (mon.0) 1190 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.019988+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.020639+0000 mgr.y (mgr.24970) 121 : cephadm [INF] Upgrade: Setting container_image for all prometheus 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.021693+0000 mon.c (mon.1) 192 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.021911+0000 mon.a (mon.0) 1191 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.023001+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.023631+0000 mgr.y (mgr.24970) 122 : cephadm [INF] Upgrade: Setting container_image for all alertmanager 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.024675+0000 mon.c (mon.1) 194 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.024871+0000 mon.a (mon.0) 1192 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.026100+0000 mon.c (mon.1) 195 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 
2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.026775+0000 mgr.y (mgr.24970) 123 : cephadm [INF] Upgrade: Setting container_image for all grafana 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.027883+0000 mon.c (mon.1) 196 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.028082+0000 mon.a (mon.0) 1193 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.029119+0000 mon.c (mon.1) 197 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.029760+0000 mgr.y (mgr.24970) 124 : cephadm [INF] Upgrade: Setting container_image for all loki 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.030812+0000 mon.c (mon.1) 198 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.031003+0000 mon.a (mon.0) 1194 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.032065+0000 mon.c (mon.1) 199 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.032706+0000 mgr.y (mgr.24970) 125 : cephadm [INF] Upgrade: Setting container_image for all promtail 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.033831+0000 mon.c (mon.1) 200 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.034055+0000 mon.a (mon.0) 1195 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.b"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.034750+0000 mgr.y (mgr.24970) 126 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.035778+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:46:54.346 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.035996+0000 mon.a (mon.0) 1196 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.039079+0000 mon.a (mon.0) 1197 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.041980+0000 mon.c (mon.1) 202 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.042189+0000 mon.a (mon.0) 1198 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.045297+0000 mon.a (mon.0) 1199 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.047787+0000 mon.c (mon.1) 203 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.047982+0000 mon.a (mon.0) 1200 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.050891+0000 mon.a (mon.0) 1201 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.053573+0000 mon.c (mon.1) 204 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.053784+0000 mon.a (mon.0) 1202 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.054693+0000 mon.c (mon.1) 205 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.054896+0000 mon.a (mon.0) 1203 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:46:54.346 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.058026+0000 mon.a (mon.0) 1204 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.060611+0000 mon.c (mon.1) 206 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.060827+0000 mon.a (mon.0) 1205 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.061636+0000 mon.c (mon.1) 207 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.061828+0000 mon.a (mon.0) 1206 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.064684+0000 mon.a (mon.0) 1207 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.067802+0000 mon.c (mon.1) 208 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.068070+0000 mon.a (mon.0) 1208 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.069093+0000 mon.c (mon.1) 209 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:46:54.346 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.069314+0000 mon.a (mon.0) 1209 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.072855+0000 mon.a (mon.0) 1210 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.076567+0000 mon.c (mon.1) 210 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", 
"name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.076813+0000 mon.a (mon.0) 1211 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.077393+0000 mon.c (mon.1) 211 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.077644+0000 mon.a (mon.0) 1212 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.083334+0000 mon.a (mon.0) 1213 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.084053+0000 mon.c (mon.1) 212 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.084289+0000 mon.a (mon.0) 1214 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.089838+0000 mon.a (mon.0) 1215 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.090715+0000 mon.c (mon.1) 213 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.090961+0000 mon.a (mon.0) 1216 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.091652+0000 mon.c (mon.1) 214 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.091888+0000 mon.a (mon.0) 1217 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.092479+0000 mon.c (mon.1) 215 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' 
entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.092710+0000 mon.a (mon.0) 1218 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.093248+0000 mon.c (mon.1) 216 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.093502+0000 mon.a (mon.0) 1219 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.094038+0000 mon.c (mon.1) 217 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.094292+0000 mon.a (mon.0) 1220 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.094898+0000 mon.c (mon.1) 218 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.095132+0000 mon.a (mon.0) 1221 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: cephadm 2026-03-10T11:46:53.095568+0000 mgr.y (mgr.24970) 127 : cephadm [INF] Upgrade: Complete! 
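The journalctl records above capture cephadm's upgrade finalization: once every daemon reports the target version, the mgr module removes the per-daemon-type container_image overrides it set while stepping through the services (mon, mgr, osd, mds, the client.* service types, plus the monitoring containers), then deletes its saved mgr/cephadm/upgrade_state key. A minimal sketch of how one might confirm that cleanup from a cephadm shell follows; the jq filter is illustrative, only the option name container_image and the config-key path come from the records above.

    # no per-daemon container_image overrides should survive "Upgrade: Complete!"
    ceph config dump --format json | jq '[.[] | select(.name == "container_image")]'
    # expected to exit non-zero once mgr/cephadm/upgrade_state has been deleted
    ceph config-key exists mgr/cephadm/upgrade_state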
2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.095810+0000 mon.c (mon.1) 219 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.096035+0000 mon.a (mon.0) 1222 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.100026+0000 mon.a (mon.0) 1223 : audit [INF] from='mgr.24970 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.100477+0000 mon.c (mon.1) 220 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.101492+0000 mon.c (mon.1) 221 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.101920+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.106771+0000 mon.a (mon.0) 1224 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.146864+0000 mon.c (mon.1) 223 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.148060+0000 mon.c (mon.1) 224 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.148966+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:46:54.347 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:53 vm05 bash[17453]: audit 2026-03-10T11:46:53.166547+0000 mon.a (mon.0) 1225 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:55.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:54 vm07 bash[46158]: cluster 2026-03-10T11:46:54.337011+0000 mgr.y (mgr.24970) 128 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:55.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:54 vm07 bash[46158]: cluster 2026-03-10T11:46:54.337011+0000 mgr.y (mgr.24970) 128 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s 
rd, 0 op/s 2026-03-10T11:46:55.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:54 vm05 bash[22470]: cluster 2026-03-10T11:46:54.337011+0000 mgr.y (mgr.24970) 128 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:55.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:54 vm05 bash[17453]: cluster 2026-03-10T11:46:54.337011+0000 mgr.y (mgr.24970) 128 : cluster [DBG] pgmap v45: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:57.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:56 vm05 bash[22470]: audit 2026-03-10T11:46:55.836189+0000 mon.a (mon.0) 1226 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:57.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:56 vm05 bash[22470]: cluster 2026-03-10T11:46:56.337344+0000 mgr.y (mgr.24970) 129 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:57.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:56 vm05 bash[17453]: audit 2026-03-10T11:46:55.836189+0000 mon.a (mon.0) 1226 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:57.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:56 vm05 bash[17453]: cluster 2026-03-10T11:46:56.337344+0000 mgr.y (mgr.24970) 129 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:57.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:56 vm07 bash[46158]: audit 2026-03-10T11:46:55.836189+0000 mon.a (mon.0) 1226 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:57.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:56 vm07 bash[46158]: audit 2026-03-10T11:46:55.836189+0000 mon.a (mon.0) 1226 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:46:57.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:56 vm07 bash[46158]: cluster 2026-03-10T11:46:56.337344+0000 mgr.y (mgr.24970) 129 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:57.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:56 vm07 bash[46158]: cluster 2026-03-10T11:46:56.337344+0000 mgr.y (mgr.24970) 129 : cluster [DBG] pgmap v46: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:46:59.119 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:46:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:46:58] "GET /metrics HTTP/1.1" 200 37490 "" "Prometheus/2.51.0" 2026-03-10T11:46:59.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:59 vm07 bash[46158]: cluster 2026-03-10T11:46:58.337958+0000 mgr.y (mgr.24970) 130 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:59.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:46:59 vm07 bash[46158]: cluster 2026-03-10T11:46:58.337958+0000 mgr.y (mgr.24970) 130 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:59.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:46:59 vm05 bash[17453]: cluster 2026-03-10T11:46:58.337958+0000 mgr.y (mgr.24970) 130 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB 
avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:46:59.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:46:59 vm05 bash[22470]: cluster 2026-03-10T11:46:58.337958+0000 mgr.y (mgr.24970) 130 : cluster [DBG] pgmap v47: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:47:00.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:00 vm07 bash[46158]: audit 2026-03-10T11:46:58.967860+0000 mgr.y (mgr.24970) 131 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:00.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:00 vm07 bash[46158]: audit 2026-03-10T11:46:58.967860+0000 mgr.y (mgr.24970) 131 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:00.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:00 vm05 bash[17453]: audit 2026-03-10T11:46:58.967860+0000 mgr.y (mgr.24970) 131 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:00.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:00 vm05 bash[22470]: audit 2026-03-10T11:46:58.967860+0000 mgr.y (mgr.24970) 131 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:01.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:01 vm07 bash[46158]: cluster 2026-03-10T11:47:00.338255+0000 mgr.y (mgr.24970) 132 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:01.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:01 vm07 bash[46158]: cluster 2026-03-10T11:47:00.338255+0000 mgr.y (mgr.24970) 132 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:01.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:01 vm07 bash[46158]: audit 2026-03-10T11:47:00.806434+0000 mon.c (mon.1) 226 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:47:01.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:01 vm07 bash[46158]: audit 2026-03-10T11:47:00.806434+0000 mon.c (mon.1) 226 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:47:01.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:01 vm05 bash[17453]: cluster 2026-03-10T11:47:00.338255+0000 mgr.y (mgr.24970) 132 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:01.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:01 vm05 bash[17453]: audit 2026-03-10T11:47:00.806434+0000 mon.c (mon.1) 226 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:47:01.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:01 vm05 bash[22470]: cluster 2026-03-10T11:47:00.338255+0000 mgr.y (mgr.24970) 132 : cluster [DBG] pgmap v48: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:01.591 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:01 vm05 bash[22470]: audit 2026-03-10T11:47:00.806434+0000 mon.c (mon.1) 226 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:47:02.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:02 vm07 bash[46158]: cluster 2026-03-10T11:47:02.338796+0000 mgr.y (mgr.24970) 133 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:47:02.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:02 vm07 bash[46158]: cluster 2026-03-10T11:47:02.338796+0000 mgr.y (mgr.24970) 133 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:47:02.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:02 vm05 bash[17453]: cluster 2026-03-10T11:47:02.338796+0000 mgr.y (mgr.24970) 133 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:47:02.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:02 vm05 bash[22470]: cluster 2026-03-10T11:47:02.338796+0000 mgr.y (mgr.24970) 133 : cluster [DBG] pgmap v49: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:47:04.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:04 vm07 bash[46158]: cluster 2026-03-10T11:47:04.339125+0000 mgr.y (mgr.24970) 134 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:04.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:04 vm07 bash[46158]: cluster 2026-03-10T11:47:04.339125+0000 mgr.y (mgr.24970) 134 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:04.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:04 vm05 bash[17453]: cluster 2026-03-10T11:47:04.339125+0000 mgr.y (mgr.24970) 134 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:04.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:04 vm05 bash[22470]: cluster 2026-03-10T11:47:04.339125+0000 mgr.y (mgr.24970) 134 : cluster [DBG] pgmap v50: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:06.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:06 vm07 bash[46158]: cluster 2026-03-10T11:47:06.339384+0000 mgr.y (mgr.24970) 135 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:06.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:06 vm07 bash[46158]: cluster 2026-03-10T11:47:06.339384+0000 mgr.y (mgr.24970) 135 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:06.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:06 vm05 bash[17453]: cluster 2026-03-10T11:47:06.339384+0000 mgr.y (mgr.24970) 135 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:06.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:06 vm05 bash[22470]: cluster 2026-03-10T11:47:06.339384+0000 mgr.y (mgr.24970) 
135 : cluster [DBG] pgmap v51: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:47:09.127 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:47:08] "GET /metrics HTTP/1.1" 200 37558 "" "Prometheus/2.51.0"
2026-03-10T11:47:09.415 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:09 vm05 bash[17453]: cluster 2026-03-10T11:47:08.339883+0000 mgr.y (mgr.24970) 136 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T11:47:09.415 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:09 vm05 bash[22470]: cluster 2026-03-10T11:47:08.339883+0000 mgr.y (mgr.24970) 136 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T11:47:09.415 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:47:09.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:09 vm07 bash[46158]: cluster 2026-03-10T11:47:08.339883+0000 mgr.y (mgr.24970) 136 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T11:47:09.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:09 vm07 bash[46158]: cluster 2026-03-10T11:47:08.339883+0000 mgr.y (mgr.24970) 136 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 0 B/s wr, 3 op/s
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (13m) 83s ago 20m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (65s) 23s ago 20m 67.8M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (91s) 83s ago 19m 41.3M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (88s) 23s ago 23m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (10m) 83s ago 24m 518M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (24m) 83s ago 24m 69.3M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (28s) 23s ago 23m 18.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (23m) 83s ago 23m 51.8M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (13m) 83s ago 20m 7908k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (13m) 23s ago 20m 7816k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (23m) 83s ago 23m 53.0M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (22m) 83s ago 22m 55.2M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (22m) 83s ago 22m 51.6M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (22m) 83s ago 22m 54.6M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (21m) 23s ago 22m 54.9M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (21m) 23s ago 21m 51.2M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (21m) 23s ago 21m 50.0M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (21m) 23s ago 21m 52.7M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (90s) 23s ago 20m 40.2M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (20m) 83s ago 20m 87.3M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:47:09.853 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (20m) 23s ago 20m 88.5M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:47:09.907 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mon | length == 2'"'"''
2026-03-10T11:47:10.137 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:10 vm05 bash[22470]: audit 2026-03-10T11:47:08.976000+0000 mgr.y (mgr.24970) 137 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:47:10.137 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:10 vm05 bash[22470]: audit 2026-03-10T11:47:09.319073+0000 mgr.y (mgr.24970) 138 : audit [DBG] from='client.34100 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:10.137 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:10 vm05 bash[17453]: audit 2026-03-10T11:47:08.976000+0000 mgr.y (mgr.24970) 137 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:47:10.137 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:10 vm05 bash[17453]: audit 2026-03-10T11:47:09.319073+0000 mgr.y (mgr.24970) 138 : audit [DBG] from='client.34100 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:10.413 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:47:10.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:10 vm07 bash[46158]: audit 2026-03-10T11:47:08.976000+0000 mgr.y (mgr.24970) 137 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
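The jq assertion just run is how the test gates each staggered step: ceph versions groups daemons by running build, so '.mon | length == 2' is true exactly when the mon map shows two distinct versions (two mons still on 17.2.0, one already on the target build), and jq's -e flag turns that boolean into the exit status the harness checks. A variant that counts daemons already on the target build directly, purely as an illustration (the e911bdeb substring is this run's sha1):

    # -e makes jq exit 0 only when the filter evaluates to true
    ceph versions | jq -e '.mon | length == 2'
    # count mons already running the target build
    ceph versions | jq -e '.mon | to_entries | map(select(.key | contains("e911bdeb")) | .value) | add == 1'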
"service status", "format": "json"}]: dispatch 2026-03-10T11:47:10.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:10 vm07 bash[46158]: audit 2026-03-10T11:47:08.976000+0000 mgr.y (mgr.24970) 137 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:10.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:10 vm07 bash[46158]: audit 2026-03-10T11:47:09.319073+0000 mgr.y (mgr.24970) 138 : audit [DBG] from='client.34100 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:10.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:10 vm07 bash[46158]: audit 2026-03-10T11:47:09.319073+0000 mgr.y (mgr.24970) 138 : audit [DBG] from='client.34100 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:10.464 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-10T11:47:10.904 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:47:10.904 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": null, 2026-03-10T11:47:10.904 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": false, 2026-03-10T11:47:10.904 INFO:teuthology.orchestra.run.vm05.stdout: "which": "", 2026-03-10T11:47:10.904 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-10T11:47:10.904 INFO:teuthology.orchestra.run.vm05.stdout: "progress": null, 2026-03-10T11:47:10.904 INFO:teuthology.orchestra.run.vm05.stdout: "message": "", 2026-03-10T11:47:10.904 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-10T11:47:10.904 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:47:10.992 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail' 2026-03-10T11:47:11.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:11 vm05 bash[22470]: audit 2026-03-10T11:47:09.852745+0000 mgr.y (mgr.24970) 139 : audit [DBG] from='client.25216 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:11.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:11 vm05 bash[22470]: cluster 2026-03-10T11:47:10.340271+0000 mgr.y (mgr.24970) 140 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 2 op/s 2026-03-10T11:47:11.263 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:11 vm05 bash[22470]: audit 2026-03-10T11:47:10.405684+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.? 
192.168.123.105:0/2929079718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:47:11.263 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:11 vm05 bash[17453]: audit 2026-03-10T11:47:09.852745+0000 mgr.y (mgr.24970) 139 : audit [DBG] from='client.25216 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:11.263 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:11 vm05 bash[17453]: cluster 2026-03-10T11:47:10.340271+0000 mgr.y (mgr.24970) 140 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T11:47:11.263 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:11 vm05 bash[17453]: audit 2026-03-10T11:47:10.405684+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.? 192.168.123.105:0/2929079718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:47:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:11 vm07 bash[46158]: audit 2026-03-10T11:47:09.852745+0000 mgr.y (mgr.24970) 139 : audit [DBG] from='client.25216 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:11 vm07 bash[46158]: audit 2026-03-10T11:47:09.852745+0000 mgr.y (mgr.24970) 139 : audit [DBG] from='client.25216 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:11 vm07 bash[46158]: cluster 2026-03-10T11:47:10.340271+0000 mgr.y (mgr.24970) 140 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T11:47:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:11 vm07 bash[46158]: cluster 2026-03-10T11:47:10.340271+0000 mgr.y (mgr.24970) 140 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 97 MiB used, 160 GiB / 160 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 2 op/s
2026-03-10T11:47:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:11 vm07 bash[46158]: audit 2026-03-10T11:47:10.405684+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.? 192.168.123.105:0/2929079718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:47:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:11 vm07 bash[46158]: audit 2026-03-10T11:47:10.405684+0000 mon.a (mon.0) 1227 : audit [DBG] from='client.?
192.168.123.105:0/2929079718' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:47:11.549 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
2026-03-10T11:47:11.612 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types mon --hosts $(ceph orch ps | grep mgr.y | awk '"'"'{print $2}'"'"')'
2026-03-10T11:47:12.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:12 vm07 bash[46158]: audit 2026-03-10T11:47:10.907613+0000 mgr.y (mgr.24970) 141 : audit [DBG] from='client.25228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:12.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:12 vm07 bash[46158]: audit 2026-03-10T11:47:10.907613+0000 mgr.y (mgr.24970) 141 : audit [DBG] from='client.25228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:12.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:12 vm07 bash[46158]: audit 2026-03-10T11:47:11.552238+0000 mon.a (mon.0) 1228 : audit [DBG] from='client.? 192.168.123.105:0/2624885976' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:47:12.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:12 vm07 bash[46158]: audit 2026-03-10T11:47:11.552238+0000 mon.a (mon.0) 1228 : audit [DBG] from='client.? 192.168.123.105:0/2624885976' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:47:12.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:12 vm05 bash[22470]: audit 2026-03-10T11:47:10.907613+0000 mgr.y (mgr.24970) 141 : audit [DBG] from='client.25228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:12.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:12 vm05 bash[22470]: audit 2026-03-10T11:47:11.552238+0000 mon.a (mon.0) 1228 : audit [DBG] from='client.? 192.168.123.105:0/2624885976' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:47:12.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:12 vm05 bash[17453]: audit 2026-03-10T11:47:10.907613+0000 mgr.y (mgr.24970) 141 : audit [DBG] from='client.25228 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:12.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:12 vm05 bash[17453]: audit 2026-03-10T11:47:11.552238+0000 mon.a (mon.0) 1228 : audit [DBG] from='client.? 192.168.123.105:0/2624885976' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
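This orch upgrade start invocation is the heart of the staggered-upgrade scenario: instead of moving the whole cluster in one pass, the test limits the pass to daemons of type mon on the host that carries mgr.y, resolving that hostname inline with a command substitution over the orch ps output (second column). Written out step by step, the targeting amounts to the sketch below; the host variable and the exact-match awk pattern are illustrative tightenings of the grep used above, the flags themselves are the ones visible in the command.

    # resolve the host running mgr.y (second column of ceph orch ps),
    # then restrict the upgrade pass to the mon daemons on that host only
    host=$(ceph orch ps | awk '$1 == "mgr.y" {print $2}')
    ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 \
        --daemon-types mon --hosts "$host"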
2026-03-10T11:47:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:13 vm07 bash[46158]: audit 2026-03-10T11:47:12.073429+0000 mgr.y (mgr.24970) 142 : audit [DBG] from='client.15330 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:13 vm07 bash[46158]: audit 2026-03-10T11:47:12.073429+0000 mgr.y (mgr.24970) 142 : audit [DBG] from='client.15330 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:13 vm07 bash[46158]: audit 2026-03-10T11:47:12.300758+0000 mgr.y (mgr.24970) 143 : audit [DBG] from='client.15336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm05", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:13 vm07 bash[46158]: audit 2026-03-10T11:47:12.300758+0000 mgr.y (mgr.24970) 143 : audit [DBG] from='client.15336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm05", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:13 vm07 bash[46158]: cluster 2026-03-10T11:47:12.340995+0000 mgr.y (mgr.24970) 144 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s
2026-03-10T11:47:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:13 vm07 bash[46158]: cluster 2026-03-10T11:47:12.340995+0000 mgr.y (mgr.24970) 144 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s
2026-03-10T11:47:13.538 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:13 vm05 bash[22470]: audit 2026-03-10T11:47:12.073429+0000 mgr.y (mgr.24970) 142 : audit [DBG] from='client.15330 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:13.538 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:13 vm05 bash[22470]: audit 2026-03-10T11:47:12.300758+0000 mgr.y (mgr.24970) 143 : audit [DBG] from='client.15336 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm05", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:13.538 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:13 vm05 bash[22470]: cluster 2026-03-10T11:47:12.340995+0000 mgr.y (mgr.24970) 144 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s
2026-03-10T11:47:13.539 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:13 vm05 bash[17453]: audit 2026-03-10T11:47:12.073429+0000 mgr.y (mgr.24970) 142 : audit [DBG] from='client.15330 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:13.539 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:13 vm05 bash[17453]: audit 2026-03-10T11:47:12.300758+0000 mgr.y (mgr.24970) 143 : audit [DBG] from='client.15336 -'
entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "mon", "hosts": "vm05", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:13.539 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:13 vm05 bash[17453]: cluster 2026-03-10T11:47:12.340995+0000 mgr.y (mgr.24970) 144 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s
2026-03-10T11:47:13.703 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:47:13.795 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done'
2026-03-10T11:47:14.338 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (13m) 88s ago 20m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (70s) 28s ago 20m 67.8M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (96s) 88s ago 20m 41.3M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (93s) 28s ago 23m 463M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (10m) 88s ago 24m 518M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (24m) 88s ago 24m 69.3M 2048M 17.2.0 e1d6a67b021e dd0f50543cf6
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (33s) 28s ago 23m 18.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (23m) 88s ago 23m 51.8M 2048M 17.2.0 e1d6a67b021e bd8a00588046
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (13m) 88s ago 20m 7908k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (13m) 28s ago 20m 7816k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (23m) 88s ago 23m 53.0M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (22m) 88s ago 22m 55.2M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (22m) 88s ago 22m 51.6M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (22m) 88s ago 22m 54.6M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (22m) 28s ago 22m 54.9M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (21m) 28s ago 21m 51.2M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (21m) 28s ago 21m 50.0M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (21m) 28s ago 21m 52.7M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (95s) 28s ago 20m 40.2M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (20m) 88s ago 20m 87.3M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:47:14.753 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (20m) 28s ago 20m 88.5M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:14 vm05 bash[22470]: cephadm 2026-03-10T11:47:13.698343+0000 mgr.y (mgr.24970) 145 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:14 vm05 bash[22470]: audit 2026-03-10T11:47:13.703454+0000 mon.a (mon.0) 1229 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:14 vm05 bash[22470]: audit 2026-03-10T11:47:13.707971+0000 mon.c (mon.1) 227 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:14 vm05 bash[22470]: audit 2026-03-10T11:47:13.712168+0000 mon.c (mon.1) 228 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:14 vm05 bash[22470]: audit 2026-03-10T11:47:13.712934+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:14 vm05 bash[22470]: audit 2026-03-10T11:47:13.717532+0000 mon.a (mon.0) 1230 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:14 vm05 bash[22470]: cephadm 2026-03-10T11:47:13.764787+0000 mgr.y (mgr.24970) 146 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:14 vm05 bash[22470]: audit 2026-03-10T11:47:14.327757+0000 mgr.y (mgr.24970) 147 : audit [DBG] from='client.15339 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:14 vm05 bash[22470]: cluster 2026-03-10T11:47:14.341405+0000 mgr.y (mgr.24970) 148 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
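The while loop driving this wait polls orch upgrade status until in_progress goes false or message carries an Error, dumping orch ps and ceph versions every 30 seconds for the archive. One wrinkle in the loop as written above: it queries the status twice per iteration, once for in_progress and once for message, so the two reads can straddle a state change. A single-read variant, purely as a sketch:

    # poll once per iteration and branch on the cached JSON
    while :; do
        status=$(ceph orch upgrade status)
        echo "$status" | jq -e '.in_progress' >/dev/null || break   # in_progress went false
        echo "$status" | jq -r '.message' | grep -q Error && break  # upgrade reported an error
        ceph orch ps; ceph versions; echo "$status"; sleep 30
    done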
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:14 vm05 bash[17453]: cephadm 2026-03-10T11:47:13.698343+0000 mgr.y (mgr.24970) 145 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:14 vm05 bash[17453]: audit 2026-03-10T11:47:13.703454+0000 mon.a (mon.0) 1229 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:14 vm05 bash[17453]: audit 2026-03-10T11:47:13.707971+0000 mon.c (mon.1) 227 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:14 vm05 bash[17453]: audit 2026-03-10T11:47:13.712168+0000 mon.c (mon.1) 228 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:14 vm05 bash[17453]: audit 2026-03-10T11:47:13.712934+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:14 vm05 bash[17453]: audit 2026-03-10T11:47:13.717532+0000 mon.a (mon.0) 1230 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:14 vm05 bash[17453]: cephadm 2026-03-10T11:47:13.764787+0000 mgr.y (mgr.24970) 146 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:14 vm05 bash[17453]: audit 2026-03-10T11:47:14.327757+0000 mgr.y (mgr.24970) 147 : audit [DBG] from='client.15339 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:14 vm05 bash[17453]: cluster 2026-03-10T11:47:14.341405+0000 mgr.y (mgr.24970) 148 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2,
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 1
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "mds": {},
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 12,
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:47:15.010 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:47:15.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: cephadm 2026-03-10T11:47:13.698343+0000 mgr.y (mgr.24970) 145 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:47:15.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: cephadm 2026-03-10T11:47:13.698343+0000 mgr.y (mgr.24970) 145 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:47:15.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.703454+0000 mon.a (mon.0) 1229 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:15.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.703454+0000 mon.a (mon.0) 1229 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:15.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.707971+0000 mon.c (mon.1) 227 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:47:15.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.707971+0000 mon.c (mon.1) 227 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:47:15.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.712168+0000 mon.c (mon.1) 228 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:15.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.712168+0000 mon.c (mon.1) 228 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:15.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.712934+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:47:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.712934+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:47:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.717532+0000 mon.a (mon.0) 1230 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:13.717532+0000 mon.a (mon.0) 1230 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: cephadm 2026-03-10T11:47:13.764787+0000 mgr.y (mgr.24970) 146 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:47:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: cephadm 2026-03-10T11:47:13.764787+0000 mgr.y (mgr.24970) 146 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:47:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:14.327757+0000 mgr.y (mgr.24970) 147 : audit [DBG] from='client.15339 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: audit 2026-03-10T11:47:14.327757+0000 mgr.y (mgr.24970) 147 : audit [DBG] from='client.15339 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: cluster 2026-03-10T11:47:14.341405+0000 mgr.y (mgr.24970) 148 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
2026-03-10T11:47:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:14 vm07 bash[46158]: cluster 2026-03-10T11:47:14.341405+0000 mgr.y (mgr.24970) 148 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 0 B/s wr, 119 op/s
2026-03-10T11:47:15.303 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:47:15.303 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-10T11:47:15.303 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true,
2026-03-10T11:47:15.303 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mon on host(s) vm05",
2026-03-10T11:47:15.303 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:47:15.303 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "0/2 daemons upgraded",
2026-03-10T11:47:15.303 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading mon daemons",
2026-03-10T11:47:15.303 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:47:15.303 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:14.548124+0000 mgr.y (mgr.24970) 149 : audit [DBG] from='client.34136 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:14.752564+0000 mgr.y (mgr.24970) 150 : audit [DBG] from='client.15351 -'
entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:15.013410+0000 mon.a (mon.0) 1231 : audit [DBG] from='client.? 192.168.123.105:0/2087045839' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:15.277814+0000 mon.a (mon.0) 1232 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: cephadm 2026-03-10T11:47:15.280946+0000 mgr.y (mgr.24970) 151 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: cephadm 2026-03-10T11:47:15.280999+0000 mgr.y (mgr.24970) 152 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:15.282057+0000 mon.c (mon.1) 230 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:15.283440+0000 mon.c (mon.1) 231 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: cephadm 2026-03-10T11:47:15.284017+0000 mgr.y (mgr.24970) 153 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:15.288231+0000 mon.a (mon.0) 1233 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:15.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:15.293058+0000 mon.c (mon.1) 232 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:15.294259+0000 mon.c (mon.1) 233 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: cephadm 2026-03-10T11:47:15.295165+0000 mgr.y (mgr.24970) 154 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:15 vm05 bash[17453]: audit 2026-03-10T11:47:15.305753+0000 mgr.y (mgr.24970) 155 : audit [DBG] from='client.15363 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:14.548124+0000 mgr.y (mgr.24970) 149 : audit [DBG] from='client.34136 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 
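
The two JSON blobs above capture the cluster mid-upgrade: `ceph versions` shows a mixed cluster (one of the three mons and both mgrs already on 19.2.3-678-ge911bdeb, the remaining mons, all eight OSDs and both rgw daemons still on 17.2.0), while `ceph orch upgrade status` reports the mon phase at "0/2 daemons upgraded". Outside teuthology the same convergence check can be scripted; the sketch below is a hypothetical helper, not part of this run, and assumes the `ceph` CLI with client.admin credentials (e.g. inside `cephadm shell`):

#!/usr/bin/env python3
# Hypothetical convergence check (not part of this job): poll the same two
# commands the test runs above until every daemon reports the target build.
# Assumes `ceph` is on PATH with an admin keyring; both commands emit JSON.
import json
import subprocess
import time

TARGET_SHA1 = "e911bdebe5c8faa3800735d1568fcdca65db60df"  # from the job config

def ceph_json(*args):
    out = subprocess.check_output(("ceph", *args, "--format", "json"), text=True)
    return json.loads(out)

while True:
    status = ceph_json("orch", "upgrade", "status")
    print(status.get("progress"), "-", status.get("message"))
    # "overall" maps a version banner (which embeds the build sha1) to a
    # daemon count; the upgrade has converged when a single banner remains
    # and it carries the target sha1.
    overall = ceph_json("versions")["overall"]
    if len(overall) == 1 and TARGET_SHA1 in next(iter(overall)):
        print("all daemons on the target build")
        break
    time.sleep(30)
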
2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:14.752564+0000 mgr.y (mgr.24970) 150 : audit [DBG] from='client.15351 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:15.013410+0000 mon.a (mon.0) 1231 : audit [DBG] from='client.? 192.168.123.105:0/2087045839' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:15.277814+0000 mon.a (mon.0) 1232 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: cephadm 2026-03-10T11:47:15.280946+0000 mgr.y (mgr.24970) 151 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: cephadm 2026-03-10T11:47:15.280999+0000 mgr.y (mgr.24970) 152 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:15.282057+0000 mon.c (mon.1) 230 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:15.283440+0000 mon.c (mon.1) 231 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: cephadm 2026-03-10T11:47:15.284017+0000 mgr.y (mgr.24970) 153 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:15.288231+0000 mon.a (mon.0) 1233 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:15.293058+0000 mon.c (mon.1) 232 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:15.294259+0000 mon.c (mon.1) 233 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: cephadm 2026-03-10T11:47:15.295165+0000 mgr.y (mgr.24970) 154 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-10T11:47:15.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:15 vm05 bash[22470]: audit 2026-03-10T11:47:15.305753+0000 mgr.y (mgr.24970) 155 : audit [DBG] from='client.15363 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:16.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:14.548124+0000 mgr.y (mgr.24970) 149 : audit [DBG] from='client.34136 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:16.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:14.752564+0000 mgr.y (mgr.24970) 150 : audit [DBG] from='client.15351 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:16.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:15.013410+0000 mon.a (mon.0) 1231 : audit [DBG] from='client.? 192.168.123.105:0/2087045839' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:47:16.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:15.277814+0000 mon.a (mon.0) 1232 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:16.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: cephadm 2026-03-10T11:47:15.280946+0000 mgr.y (mgr.24970) 151 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-10T11:47:16.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: cephadm 2026-03-10T11:47:15.280999+0000 mgr.y (mgr.24970) 152 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T11:47:16.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:15.282057+0000 mon.c (mon.1) 230 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:16.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:15.283440+0000 mon.c (mon.1) 231 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:47:16.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: cephadm 2026-03-10T11:47:15.284017+0000 mgr.y (mgr.24970) 153 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T11:47:16.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:15.288231+0000 mon.a (mon.0) 1233 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:16.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:15.293058+0000 mon.c (mon.1) 232 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T11:47:16.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:15.294259+0000 mon.c (mon.1) 233 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["c"]}]: dispatch 2026-03-10T11:47:16.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: cephadm 2026-03-10T11:47:15.295165+0000 mgr.y (mgr.24970) 154 : cephadm [INF] Upgrade: It appears safe to stop mon.c 2026-03-10T11:47:16.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:15 vm07 bash[46158]: audit 2026-03-10T11:47:15.305753+0000 mgr.y (mgr.24970) 155 : audit [DBG] from='client.15363 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:47:16.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.341 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.341 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.341 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.342 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
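
mon.c is stopped just below, and the gate that allowed it is visible in the entries above: the mgr dispatched `quorum_status` and then `mon ok-to-stop` with ids ["c"], and only logged "It appears safe to stop mon.c" once the mons agreed. The same pre-flight test can be run by hand; a minimal sketch, assuming client.admin access and this run's daemon id:

#!/usr/bin/env python3
# Minimal sketch of cephadm's pre-restart gate seen in the log above:
# `ceph mon ok-to-stop <id>` exits non-zero if stopping that mon would
# break quorum. Assumes the ceph CLI and client.admin credentials.
import subprocess
import sys

MON_ID = "c"  # daemon id taken from this run's roles

result = subprocess.run(("ceph", "mon", "ok-to-stop", MON_ID))
if result.returncode != 0:
    sys.exit(f"not safe to stop mon.{MON_ID}: quorum would be at risk")
print(f"it appears safe to stop mon.{MON_ID}")
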
2026-03-10T11:47:16.342 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.342 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.614 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: Stopping Ceph mon.c for 72041074-1c73-11f1-8607-4fca9a5e0a4d... 2026-03-10T11:47:16.614 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:16 vm05 bash[22470]: debug 2026-03-10T11:47:16.392+0000 7f7662d90700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-10T11:47:16.614 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:16 vm05 bash[22470]: debug 2026-03-10T11:47:16.392+0000 7f7662d90700 -1 mon.c@1(peon) e3 *** Got Signal Terminated *** 2026-03-10T11:47:16.614 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:16 vm05 bash[53899]: [10/Mar/2026:11:47:16] ENGINE Bus STOPPING 2026-03-10T11:47:16.881 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:16 vm05 bash[65291]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mon-c 2026-03-10T11:47:16.882 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.c.service: Deactivated successfully. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: Stopped Ceph mon.c for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:16 vm05 bash[53899]: [10/Mar/2026:11:47:16] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-10T11:47:16.882 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:16 vm05 bash[53899]: [10/Mar/2026:11:47:16] ENGINE Bus STOPPED 2026-03-10T11:47:16.882 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:16 vm05 bash[53899]: [10/Mar/2026:11:47:16] ENGINE Bus STARTING 2026-03-10T11:47:16.882 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:16.882 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
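
The systemd warning repeated above for every daemon comes from line 23 of the cephadm-generated unit template ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service, which sets KillMode=none so that the container runtime, not systemd, tears down the daemon's processes. For cephadm-managed units the setting is deliberate and should be left in place; purely as an illustration of what systemd is asking for, a drop-in override on a unit you manage yourself (created with `systemctl edit <unit>`) would look like:

# hypothetical drop-in for a self-managed unit only; do not apply this
# to cephadm's ceph-<fsid>@.service template
[Service]
KillMode=mixed
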
2026-03-10T11:47:17.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:16 vm05 bash[53899]: [10/Mar/2026:11:47:16] ENGINE Serving on http://:::9283 2026-03-10T11:47:17.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:16 vm05 bash[53899]: [10/Mar/2026:11:47:16] ENGINE Bus STARTED 2026-03-10T11:47:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:16 vm05 systemd[1]: Started Ceph mon.c for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:47:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.000+0000 7fb3cad09d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-10T11:47:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.000+0000 7fb3cad09d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-10T11:47:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.000+0000 7fb3cad09d80 0 pidfile_write: ignore empty --pid-file 2026-03-10T11:47:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.000+0000 7fb3cad09d80 0 load: jerasure load: lrc 2026-03-10T11:47:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-10T11:47:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Git sha 0 2026-03-10T11:47:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: DB SUMMARY 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: DB Session ID: 5K3HB0903PRCSQR92RZD 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: CURRENT file: CURRENT 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: MANIFEST file: MANIFEST-000009 size: 2048 Bytes 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 1, files: 000042.sst 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000040.log size: 2517495 ; 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.error_if_exists: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: 
Options.create_if_missing: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.env: 0x555b75faadc0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.info_log: 0x555ba9fcd7e0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.statistics: (nil) 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.use_fsync: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 
7fb3cad09d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.db_log_dir: 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.wal_dir: 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.write_buffer_manager: 0x555ba9fd1900 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-10T11:47:17.342 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-10T11:47:17.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.unordered_write: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.row_cache: None 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.wal_filter: None 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.two_write_queues: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.wal_compression: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.atomic_flush: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: 
Options.write_dbid_to_manifest: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-10T11:47:17.343 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.stats_persist_period_sec: 600 
2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_open_files: -1 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Compression algorithms supported: 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: kZSTD supported: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: kXpressCompression supported: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: kZlibCompression supported: 1 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: 
[db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000009 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.merge_operator: 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_filter: None 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x555ba9fcc3c0) 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: cache_index_and_filter_blocks: 1 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: pin_top_level_index_and_filter: 1 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: index_type: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: data_block_index_type: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: index_shortening: 1 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: data_block_hash_table_util_ratio: 0.750000 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: checksum: 4 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: no_block_cache: 0 2026-03-10T11:47:17.344 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: block_cache: 0x555ba9ff3350 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: block_cache_name: BinnedLRUCache 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 
bash[65415]: block_cache_options: 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: capacity : 536870912 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: num_shard_bits : 4 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: strict_capacity_limit : 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: high_pri_pool_ratio: 0.000 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: block_cache_compressed: (nil) 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: persistent_cache: (nil) 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: block_size: 4096 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: block_size_deviation: 10 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: block_restart_interval: 16 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: index_block_restart_interval: 1 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: metadata_block_size: 4096 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: partition_filters: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: use_delta_encoding: 1 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: filter_policy: bloomfilter 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: whole_key_filtering: 1 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: verify_compression: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: read_amp_bytes_per_bit: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: format_version: 5 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: enable_index_compression: 1 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: block_align: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: max_auto_readahead_size: 262144 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: prepopulate_block_cache: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: initial_auto_readahead_size: 8192 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: num_file_reads_for_auto_readahead: 2 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression: NoCompression 
2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.num_levels: 7 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-10T11:47:17.345 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 
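[editor's note] The Options dump above fixes the monitor store's LSM shape: a 32 MiB write buffer, 64 MiB target file size, and a 256 MiB level-1 budget growing 10x per level across 7 levels. A short sketch of the nominal per-level byte targets those numbers imply; note the log also shows level_compaction_dynamic_level_bytes: 1, under which RocksDB re-derives level sizes backwards from the last level at runtime, so this is the static shape only:

```python
# Nominal level-size targets from the logged options (static view only;
# level_compaction_dynamic_level_bytes=1 recomputes these dynamically).
base = 268_435_456          # Options.max_bytes_for_level_base  (256 MiB)
multiplier = 10.0           # Options.max_bytes_for_level_multiplier
num_levels = 7              # Options.num_levels

for level in range(1, num_levels):
    target = int(base * multiplier ** (level - 1))
    print(f"L{level}: {target / 2**20:,.0f} MiB")
# L1: 256 MiB, L2: 2,560 MiB, L3: 25,600 MiB, ...
```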
2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-10T11:47:17.346 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-10T11:47:17.347 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.bloom_locality: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-10T11:47:17.347 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.ttl: 2592000 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.enable_blob_files: false 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.min_blob_size: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 42.sst 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
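[editor's note] Because the journalctl capture runs many records together on each line, pulling individual settings back out of a log like this is mostly regex work. A rough sketch, assuming a local copy of the log ("teuthology.log" is a hypothetical path) and keeping only single-token values:

```python
import re

# Extract "rocksdb: Options.<name>: <value>" pairs from a teuthology log.
# Multi-word values (e.g. table_properties_collectors) are truncated to
# their first token by this simple pattern.
OPT = re.compile(r"rocksdb: (Options\.[\w.\[\]]+): (\S+)")

with open("teuthology.log") as log:
    for line in log:
        for name, value in OPT.findall(line):
            print(f"{name} = {value}")
```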
2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000009 succeeded,manifest_file_number is 9, next_file_number is 44, last_sequence is 23832, log_number is 40,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 40 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 01b8ee4d-50b1-4242-84b5-867afb57cbea 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773143237009660, "job": 1, "event": "recovery_started", "wal_files": [40]} 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.004+0000 7fb3cad09d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #40 mode 2 2026-03-10T11:47:17.347 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.012+0000 7fb3cad09d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773143237018782, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 45, "file_size": 1522264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23837, "largest_seqno": 25612, "table_properties": {"data_size": 1515828, "index_size": 3479, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 18953, "raw_average_key_size": 25, "raw_value_size": 1500854, "raw_average_value_size": 2017, "num_data_blocks": 158, "num_entries": 744, "num_filter_entries": 744, "num_deletions": 8, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773143237, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "01b8ee4d-50b1-4242-84b5-867afb57cbea", "db_session_id": "5K3HB0903PRCSQR92RZD", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.012+0000 7fb3cad09d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773143237018911, "job": 1, "event": "recovery_finished"} 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.012+0000 7fb3cad09d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 47 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.012+0000 7fb3cad09d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x555ba9ff4e00 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 4 rocksdb: DB pointer 0x555baa100000 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 0 starting mon.c rank 1 at public addrs [v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0] at bind addrs [v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0] mon_data /var/lib/ceph/mon/ceph-c fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 1 mon.c@-1(???) e3 preinit fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 0 mon.c@-1(???).mds e1 new map 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 0 mon.c@-1(???).mds e1 print_map 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: e1 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: btime 1970-01-01T00:00:00:000000+0000 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2} 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: legacy client fscid: -1 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: No filesystems configured 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 0 mon.c@-1(???).osd e98 crush map has features 3314933000854323200, adjusting msgr requires 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 0 mon.c@-1(???).osd e98 crush map has features 432629239337189376, adjusting msgr requires 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 
2026-03-10T11:47:17.016+0000 7fb3cad09d80 0 mon.c@-1(???).osd e98 crush map has features 432629239337189376, adjusting msgr requires 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.016+0000 7fb3cad09d80 0 mon.c@-1(???).osd e98 crush map has features 432629239337189376, adjusting msgr requires 2026-03-10T11:47:17.348 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:17 vm05 bash[65415]: debug 2026-03-10T11:47:17.020+0000 7fb3cad09d80 1 mon.c@-1(???).paxosservice(auth 1..25) refresh upgraded, format 0 -> 3 2026-03-10T11:47:18.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.231244+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:18.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.231244+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:18.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.234181+0000 mon.a (mon.0) 1235 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:18.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.234181+0000 mon.a (mon.0) 1235 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:18.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.237795+0000 mon.a (mon.0) 1236 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:18.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.237795+0000 mon.a (mon.0) 1236 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.244235+0000 mon.a (mon.0) 1237 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.244235+0000 mon.a (mon.0) 1237 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.244276+0000 mon.a (mon.0) 1238 : cluster [DBG] fsmap 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.244276+0000 mon.a (mon.0) 1238 : cluster [DBG] fsmap 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.244303+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.244303+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.244571+0000 mon.a (mon.0) 1240 : cluster [DBG] mgrmap e41: y(active, since 107s), 
standbys: x 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.244571+0000 mon.a (mon.0) 1240 : cluster [DBG] mgrmap e41: y(active, since 107s), standbys: x 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.250002+0000 mon.a (mon.0) 1241 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: cluster 2026-03-10T11:47:17.250002+0000 mon.a (mon.0) 1241 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: audit 2026-03-10T11:47:17.252887+0000 mon.a (mon.0) 1242 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: audit 2026-03-10T11:47:17.252887+0000 mon.a (mon.0) 1242 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: audit 2026-03-10T11:47:17.258013+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: audit 2026-03-10T11:47:17.258013+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: audit 2026-03-10T11:47:17.259083+0000 mon.a (mon.0) 1243 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:18 vm05 bash[65415]: audit 2026-03-10T11:47:17.259083+0000 mon.a (mon.0) 1243 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: cluster 2026-03-10T11:47:17.231244+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: cluster 2026-03-10T11:47:17.234181+0000 mon.a (mon.0) 1235 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: cluster 2026-03-10T11:47:17.237795+0000 mon.a (mon.0) 1236 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: cluster 2026-03-10T11:47:17.244235+0000 mon.a (mon.0) 1237 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: cluster 2026-03-10T11:47:17.244276+0000 mon.a (mon.0) 1238 : cluster [DBG] fsmap 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: cluster 2026-03-10T11:47:17.244303+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: cluster 2026-03-10T11:47:17.244571+0000 mon.a (mon.0) 1240 : cluster [DBG] mgrmap e41: y(active, 
since 107s), standbys: x 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: cluster 2026-03-10T11:47:17.250002+0000 mon.a (mon.0) 1241 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: audit 2026-03-10T11:47:17.252887+0000 mon.a (mon.0) 1242 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: audit 2026-03-10T11:47:17.258013+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:18.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:18 vm05 bash[17453]: audit 2026-03-10T11:47:17.259083+0000 mon.a (mon.0) 1243 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:18.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.231244+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.231244+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.234181+0000 mon.a (mon.0) 1235 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.234181+0000 mon.a (mon.0) 1235 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.237795+0000 mon.a (mon.0) 1236 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.237795+0000 mon.a (mon.0) 1236 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.244235+0000 mon.a (mon.0) 1237 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.244235+0000 mon.a (mon.0) 1237 : cluster [DBG] monmap e3: 3 mons at {a=[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0],b=[v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0],c=[v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0]} 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.244276+0000 mon.a (mon.0) 1238 : cluster [DBG] fsmap 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.244276+0000 mon.a (mon.0) 1238 : cluster [DBG] fsmap 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.244303+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 
2026-03-10T11:47:17.244303+0000 mon.a (mon.0) 1239 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.244571+0000 mon.a (mon.0) 1240 : cluster [DBG] mgrmap e41: y(active, since 107s), standbys: x 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.244571+0000 mon.a (mon.0) 1240 : cluster [DBG] mgrmap e41: y(active, since 107s), standbys: x 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.250002+0000 mon.a (mon.0) 1241 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: cluster 2026-03-10T11:47:17.250002+0000 mon.a (mon.0) 1241 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: audit 2026-03-10T11:47:17.252887+0000 mon.a (mon.0) 1242 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: audit 2026-03-10T11:47:17.252887+0000 mon.a (mon.0) 1242 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: audit 2026-03-10T11:47:17.258013+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: audit 2026-03-10T11:47:17.258013+0000 mon.b (mon.2) 5 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: audit 2026-03-10T11:47:17.259083+0000 mon.a (mon.0) 1243 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:18.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:18 vm07 bash[46158]: audit 2026-03-10T11:47:17.259083+0000 mon.a (mon.0) 1243 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:19.255 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:18 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:47:18] "GET /metrics HTTP/1.1" 200 37558 "" "Prometheus/2.51.0" 2026-03-10T11:47:19.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:19 vm05 bash[65415]: cluster 2026-03-10T11:47:18.342394+0000 mgr.y (mgr.24970) 159 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:47:19.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:19 vm05 bash[65415]: cluster 2026-03-10T11:47:18.342394+0000 mgr.y (mgr.24970) 159 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:47:19.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:19 vm05 bash[17453]: cluster 2026-03-10T11:47:18.342394+0000 mgr.y (mgr.24970) 159 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:47:19.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:19 vm07 bash[46158]: cluster 2026-03-10T11:47:18.342394+0000 mgr.y (mgr.24970) 159 : 
cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:47:19.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:19 vm07 bash[46158]: cluster 2026-03-10T11:47:18.342394+0000 mgr.y (mgr.24970) 159 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 73 KiB/s rd, 0 B/s wr, 120 op/s 2026-03-10T11:47:20.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:20 vm05 bash[65415]: audit 2026-03-10T11:47:18.982076+0000 mgr.y (mgr.24970) 160 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:20.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:20 vm05 bash[65415]: audit 2026-03-10T11:47:18.982076+0000 mgr.y (mgr.24970) 160 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:20.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:20 vm05 bash[17453]: audit 2026-03-10T11:47:18.982076+0000 mgr.y (mgr.24970) 160 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:20.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:20 vm07 bash[46158]: audit 2026-03-10T11:47:18.982076+0000 mgr.y (mgr.24970) 160 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:20.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:20 vm07 bash[46158]: audit 2026-03-10T11:47:18.982076+0000 mgr.y (mgr.24970) 160 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:47:21.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:21 vm05 bash[65415]: cluster 2026-03-10T11:47:20.342766+0000 mgr.y (mgr.24970) 161 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 117 op/s 2026-03-10T11:47:21.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:21 vm05 bash[65415]: cluster 2026-03-10T11:47:20.342766+0000 mgr.y (mgr.24970) 161 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 117 op/s 2026-03-10T11:47:21.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:21 vm05 bash[17453]: cluster 2026-03-10T11:47:20.342766+0000 mgr.y (mgr.24970) 161 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 117 op/s 2026-03-10T11:47:21.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:21 vm07 bash[46158]: cluster 2026-03-10T11:47:20.342766+0000 mgr.y (mgr.24970) 161 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 117 op/s 2026-03-10T11:47:21.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:21 vm07 bash[46158]: cluster 2026-03-10T11:47:20.342766+0000 mgr.y (mgr.24970) 161 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 117 op/s 2026-03-10T11:47:22.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:22 vm05 bash[53899]: debug 2026-03-10T11:47:22.024+0000 7f255e962640 -1 mgr.server 
handle_report got status from non-daemon mon.c 2026-03-10T11:47:23.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:23 vm05 bash[65415]: cluster 2026-03-10T11:47:22.343427+0000 mgr.y (mgr.24970) 162 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s 2026-03-10T11:47:23.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:23 vm05 bash[65415]: cluster 2026-03-10T11:47:22.343427+0000 mgr.y (mgr.24970) 162 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s 2026-03-10T11:47:23.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:23 vm05 bash[65415]: audit 2026-03-10T11:47:22.731552+0000 mon.a (mon.0) 1244 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:23.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:23 vm05 bash[65415]: audit 2026-03-10T11:47:22.731552+0000 mon.a (mon.0) 1244 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:23.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:23 vm05 bash[65415]: audit 2026-03-10T11:47:22.739030+0000 mon.a (mon.0) 1245 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:23.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:23 vm05 bash[65415]: audit 2026-03-10T11:47:22.739030+0000 mon.a (mon.0) 1245 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:23.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:23 vm05 bash[17453]: cluster 2026-03-10T11:47:22.343427+0000 mgr.y (mgr.24970) 162 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s 2026-03-10T11:47:23.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:23 vm05 bash[17453]: audit 2026-03-10T11:47:22.731552+0000 mon.a (mon.0) 1244 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:23.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:23 vm05 bash[17453]: audit 2026-03-10T11:47:22.739030+0000 mon.a (mon.0) 1245 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:23 vm07 bash[46158]: cluster 2026-03-10T11:47:22.343427+0000 mgr.y (mgr.24970) 162 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s 2026-03-10T11:47:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:23 vm07 bash[46158]: cluster 2026-03-10T11:47:22.343427+0000 mgr.y (mgr.24970) 162 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s 2026-03-10T11:47:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:23 vm07 bash[46158]: audit 2026-03-10T11:47:22.731552+0000 mon.a (mon.0) 1244 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:23 vm07 bash[46158]: audit 2026-03-10T11:47:22.731552+0000 mon.a (mon.0) 1244 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:23 vm07 bash[46158]: audit 2026-03-10T11:47:22.739030+0000 mon.a (mon.0) 1245 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:23 vm07 bash[46158]: audit 2026-03-10T11:47:22.739030+0000 mon.a (mon.0) 1245 : audit [INF] from='mgr.24970 ' entity='mgr.y' 
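[editor's note] The recurring "pgmap vN: ..." lines above are the mgr's periodic placement-group digest, and they parse cleanly into structured fields. A minimal sketch against one of the logged lines (the field names are ad hoc, not a Ceph format guarantee):

```python
import re

# Parse the mgr's "pgmap vN: ..." cluster-log digest into named fields.
# The sample line is copied from the log above; field names are mine.
PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<clean>\d+) active\+clean; "
    r"(?P<data>\d+ \S+) data, (?P<used>\d+ \S+) used, "
    r"(?P<avail>\d+ \S+) / (?P<total>\d+ \S+) avail"
)

line = ("cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, "
        "103 MiB used, 160 GiB / 160 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s")
match = PGMAP.search(line)
if match:
    print(match.groupdict())
# {'ver': '59', 'pgs': '161', 'clean': '161', 'data': '457 KiB',
#  'used': '103 MiB', 'avail': '160 GiB', 'total': '160 GiB'}
```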
2026-03-10T11:47:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:24 vm07 bash[46158]: audit 2026-03-10T11:47:23.368177+0000 mon.a (mon.0) 1246 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:24 vm07 bash[46158]: audit 2026-03-10T11:47:23.368177+0000 mon.a (mon.0) 1246 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:24 vm07 bash[46158]: audit 2026-03-10T11:47:23.374659+0000 mon.a (mon.0) 1247 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:24 vm07 bash[46158]: audit 2026-03-10T11:47:23.374659+0000 mon.a (mon.0) 1247 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:24 vm07 bash[46158]: cluster 2026-03-10T11:47:24.343797+0000 mgr.y (mgr.24970) 163 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:24.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:24 vm07 bash[46158]: cluster 2026-03-10T11:47:24.343797+0000 mgr.y (mgr.24970) 163 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:24 vm05 bash[65415]: audit 2026-03-10T11:47:23.368177+0000 mon.a (mon.0) 1246 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:24 vm05 bash[65415]: audit 2026-03-10T11:47:23.368177+0000 mon.a (mon.0) 1246 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:24 vm05 bash[65415]: audit 2026-03-10T11:47:23.374659+0000 mon.a (mon.0) 1247 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:24 vm05 bash[65415]: audit 2026-03-10T11:47:23.374659+0000 mon.a (mon.0) 1247 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:24 vm05 bash[65415]: cluster 2026-03-10T11:47:24.343797+0000 mgr.y (mgr.24970) 163 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:24 vm05 bash[65415]: cluster 2026-03-10T11:47:24.343797+0000 mgr.y (mgr.24970) 163 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:24.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:24 vm05 bash[17453]: audit 2026-03-10T11:47:23.368177+0000 mon.a (mon.0) 1246 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:24 vm05 bash[17453]: audit 2026-03-10T11:47:23.374659+0000 mon.a (mon.0) 1247 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:24.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:24 vm05 bash[17453]: cluster 2026-03-10T11:47:24.343797+0000 mgr.y (mgr.24970) 163 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:26 vm07 bash[46158]: cluster 
2026-03-10T11:47:26.344127+0000 mgr.y (mgr.24970) 164 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:26.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:26 vm07 bash[46158]: cluster 2026-03-10T11:47:26.344127+0000 mgr.y (mgr.24970) 164 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:26.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:26 vm05 bash[65415]: cluster 2026-03-10T11:47:26.344127+0000 mgr.y (mgr.24970) 164 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:26.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:26 vm05 bash[65415]: cluster 2026-03-10T11:47:26.344127+0000 mgr.y (mgr.24970) 164 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:26.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:26 vm05 bash[17453]: cluster 2026-03-10T11:47:26.344127+0000 mgr.y (mgr.24970) 164 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: cluster 2026-03-10T11:47:28.344637+0000 mgr.y (mgr.24970) 165 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: audit 2026-03-10T11:47:28.982931+0000 mon.a (mon.0) 1248 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: audit 2026-03-10T11:47:28.988998+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: audit 2026-03-10T11:47:28.990050+0000 mon.a (mon.0) 1249 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: audit 2026-03-10T11:47:28.990184+0000 mon.b (mon.2) 7 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: audit 2026-03-10T11:47:28.997967+0000 mon.a (mon.0) 1250 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: audit 2026-03-10T11:47:29.040460+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: audit 2026-03-10T11:47:29.041877+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: audit 2026-03-10T11:47:29.042852+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' 
entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 bash[17453]: audit 2026-03-10T11:47:29.043538+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch 2026-03-10T11:47:29.158 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:47:28] "GET /metrics HTTP/1.1" 200 37558 "" "Prometheus/2.51.0" 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: cluster 2026-03-10T11:47:28.344637+0000 mgr.y (mgr.24970) 165 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: cluster 2026-03-10T11:47:28.344637+0000 mgr.y (mgr.24970) 165 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.982931+0000 mon.a (mon.0) 1248 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.982931+0000 mon.a (mon.0) 1248 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.988998+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.988998+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.990050+0000 mon.a (mon.0) 1249 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.990050+0000 mon.a (mon.0) 1249 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.990184+0000 mon.b (mon.2) 7 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.990184+0000 mon.b (mon.2) 7 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.997967+0000 mon.a (mon.0) 1250 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:28.997967+0000 mon.a (mon.0) 1250 : audit [INF] from='mgr.24970 ' entity='mgr.y' 2026-03-10T11:47:29.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 
vm07 bash[46158]: audit 2026-03-10T11:47:29.040460+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:29.041877+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:29.042852+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:29 vm07 bash[46158]: audit 2026-03-10T11:47:29.043538+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: cluster 2026-03-10T11:47:28.344637+0000 mgr.y (mgr.24970) 165 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: audit 2026-03-10T11:47:28.982931+0000 mon.a (mon.0) 1248 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: audit 2026-03-10T11:47:28.988998+0000 mon.b (mon.2) 6 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: audit 2026-03-10T11:47:28.990050+0000 mon.a (mon.0) 1249 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: audit 2026-03-10T11:47:28.990184+0000 mon.b (mon.2) 7 : audit [INF] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: audit 2026-03-10T11:47:28.997967+0000 mon.a (mon.0) 1250 : audit [INF] from='mgr.24970 ' entity='mgr.y'
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: audit 2026-03-10T11:47:29.040460+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: audit 2026-03-10T11:47:29.041877+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: audit 2026-03-10T11:47:29.042852+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "quorum_status"}]: dispatch
2026-03-10T11:47:29.446 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 bash[65415]: audit 2026-03-10T11:47:29.043538+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon ok-to-stop", "ids": ["a"]}]: dispatch
2026-03-10T11:47:30.048 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.048 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.048 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.048 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:47:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.048 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:47:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.048 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:47:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.048 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:47:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.051 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:47:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.051 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:47:29 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: Stopping Ceph mon.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:47:30.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[17453]: debug 2026-03-10T11:47:30.084+0000 7fa7d6a42700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T11:47:30.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[17453]: debug 2026-03-10T11:47:30.084+0000 7fa7d6a42700 -1 mon.a@0(leader) e3 *** Got Signal Terminated ***
2026-03-10T11:47:30.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68852]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mon-a
2026-03-10T11:47:30.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.a.service: Deactivated successfully.
2026-03-10T11:47:30.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: Stopped Ceph mon.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:47:30.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.841 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: Started Ceph mon.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.572+0000 7f9309d94d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.572+0000 7f9309d94d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.572+0000 7f9309d94d80 0 pidfile_write: ignore empty --pid-file
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 0 load: jerasure load: lrc
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Git sha 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: DB SUMMARY
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: DB Session ID: FQGFB9WZA6A4AX9MHRGJ
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: CURRENT file: CURRENT
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: MANIFEST file: MANIFEST-000015 size: 2124 Bytes
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000048.sst
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000046.log size: 2129392 ;
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.error_if_exists: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.create_if_missing: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.env: 0x55c5c1cb5dc0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.info_log: 0x55c5f6b3f7e0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.statistics: (nil)
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.use_fsync: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.576+0000 7f9309d94d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.db_log_dir:
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.wal_dir:
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-10T11:47:30.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.write_buffer_manager: 0x55c5f6b43900
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.unordered_write: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.row_cache: None
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.wal_filter: None
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.two_write_queues: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.wal_compression: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.atomic_flush: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.max_open_files: -1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Compression algorithms supported:
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: kZSTD supported: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: kXpressCompression supported: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: kZlibCompression supported: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-10T11:47:30.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.580+0000 7f9309d94d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.merge_operator:
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_filter: None
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c5f6b3e3c0)
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: cache_index_and_filter_blocks: 1
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: pin_top_level_index_and_filter: 1
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: index_type: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: data_block_index_type: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: index_shortening: 1
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: data_block_hash_table_util_ratio: 0.750000
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: checksum: 4
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: no_block_cache: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: block_cache: 0x55c5f6b65350
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: block_cache_name: BinnedLRUCache
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: block_cache_options:
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: capacity : 536870912
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: num_shard_bits : 4
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: strict_capacity_limit : 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: high_pri_pool_ratio: 0.000
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: block_cache_compressed: (nil)
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: persistent_cache: (nil)
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: block_size: 4096
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: block_size_deviation: 10
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: block_restart_interval: 16
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: index_block_restart_interval: 1
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: metadata_block_size: 4096
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: partition_filters: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: use_delta_encoding: 1
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: filter_policy: bloomfilter
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: whole_key_filtering: 1
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: verify_compression: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: read_amp_bytes_per_bit: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: format_version: 5
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: enable_index_compression: 1
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: block_align: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: max_auto_readahead_size: 262144
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: prepopulate_block_cache: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: initial_auto_readahead_size: 8192
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: num_file_reads_for_auto_readahead: 2
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression: NoCompression
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.num_levels: 7
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-10T11:47:30.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.max_compaction_bytes: 1677721600
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.arena_block_size: 1048576
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.disable_auto_compactions: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.584+0000 7f9309d94d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.inplace_update_support: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.inplace_update_num_locks: 10000
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.memtable_whole_key_filtering: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.memtable_huge_page_size: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.bloom_locality: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.max_successive_merges: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.optimize_filters_for_hits: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.paranoid_file_checks: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.force_consistency_checks: 1
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.report_bg_io_stats: 0
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.ttl: 2592000
2026-03-10T11:47:30.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.periodic_compaction_seconds: 0
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.preserve_internal_time_seconds: 0
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.enable_blob_files: false
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.min_blob_size: 0
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.blob_file_size: 268435456
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.blob_compression_type: NoCompression
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.enable_blob_garbage_collection: false
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.blob_compaction_readahead_size: 0
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.blob_file_starting_level: 0
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 3 rocksdb: [table/block_based/block_based_table_reader.cc:721] At least one SST file opened without unique ID to verify: 48.sst
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000015 succeeded,manifest_file_number is 15, next_file_number is 50, last_sequence is 21970, log_number is 46,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 46
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3aaeb5bb-1366-4a1f-a6d8-6137f3cd1b80
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773143250592864, "job": 1, "event": "recovery_started", "wal_files": [46]}
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.588+0000 7f9309d94d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #46 mode 2
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.596+0000 7f9309d94d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773143250602886, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 51, "file_size": 1903224, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21971, "largest_seqno": 23845, "table_properties": {"data_size": 1895736, "index_size": 4339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 21023, "raw_average_key_size": 25, "raw_value_size": 1878590, "raw_average_value_size": 2247, "num_data_blocks": 196, "num_entries": 836, "num_filter_entries": 836, "num_deletions": 8, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773143250, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3aaeb5bb-1366-4a1f-a6d8-6137f3cd1b80", "db_session_id": "FQGFB9WZA6A4AX9MHRGJ", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.596+0000 7f9309d94d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773143250602955, "job": 1, "event": "recovery_finished"}
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.596+0000 7f9309d94d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 53
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.596+0000 7f9309d94d80 4 rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.600+0000 7f9309d94d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.600+0000 7f9309d94d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c5f6b66e00
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.600+0000 7f9309d94d80 4 rocksdb: DB pointer 0x55c5f6c72000
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.600+0000 7f92ffb5e640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: debug 2026-03-10T11:47:30.600+0000 7f92ffb5e640 4 rocksdb: [db/db_impl/db_impl.cc:1111]
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: ** DB Stats **
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Uptime(secs): 0.0 total, 0.0 interval
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: ** Compaction Stats [default] **
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: L0 1/0 1.82 MB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 232.7 0.01 0.00 1 0.008 0 0 0.0 0.0
2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: L6 1/0 9.94 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Sum 2/0 11.76 MB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 232.7 0.01 0.00 1 0.008 0 0 0.0 0.0 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 232.7 0.01 0.00 1 0.008 0 0 0.0 0.0 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: ** Compaction Stats [default] ** 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 232.7 0.01 0.00 1 0.008 0 0 0.0 0.0 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Flush(GB): cumulative 0.002, interval 0.002 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: AddFile(Total Files): cumulative 0, interval 0 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: AddFile(Keys): cumulative 0, interval 0 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Cumulative compaction: 0.00 GB write, 92.71 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Interval compaction: 0.00 GB write, 92.71 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Block cache BinnedLRUCache@0x55c5f6b65350#7 capacity: 512.00 MB usage: 51.12 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.4e-05 secs_since: 0 2026-03-10T11:47:30.847 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: Block cache entry stats(count,size,portion): DataBlock(2,12.16 KB,0.00231862%) FilterBlock(2,14.31 KB,0.00272989%) IndexBlock(2,24.66 KB,0.00470281%) Misc(1,0.00 KB,0%) 2026-03-10T11:47:30.847 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:30 vm05 bash[68966]: ** File Read Latency Histogram By Level [default] ** 2026-03-10T11:47:30.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:30.848 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:30.848 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:30.848 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:30.848 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:47:30.848 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:47:30 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:47:32.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:32 vm07 bash[43660]: ignoring --setuser ceph since I am not root 2026-03-10T11:47:32.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:32 vm07 bash[43660]: ignoring --setgroup ceph since I am not root 2026-03-10T11:47:32.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:32 vm07 bash[43660]: debug 2026-03-10T11:47:32.106+0000 7f7ab71e5640 1 -- 192.168.123.107:0/1782251696 <== mon.1 v2:192.168.123.105:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55967fecf4a0 con 0x55967fed0800 2026-03-10T11:47:32.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:32 vm07 bash[43660]: debug 2026-03-10T11:47:32.170+0000 7f7ab9a42140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:47:32.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:32 vm07 bash[43660]: debug 2026-03-10T11:47:32.206+0000 7f7ab9a42140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:47:32.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:32 vm07 bash[43660]: debug 2026-03-10T11:47:32.338+0000 7f7ab9a42140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T11:47:32.591 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:32 vm05 bash[53899]: ignoring --setuser ceph since I am not root 2026-03-10T11:47:32.591 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:32 vm05 bash[53899]: ignoring --setgroup ceph since I am not root 2026-03-10T11:47:32.591 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:32 vm05 bash[53899]: debug 2026-03-10T11:47:32.184+0000 7f279433d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-10T11:47:32.591 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:32 vm05 bash[53899]: debug 2026-03-10T11:47:32.224+0000 7f279433d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-10T11:47:32.591 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:32 vm05 bash[53899]: debug 2026-03-10T11:47:32.344+0000 7f279433d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-10T11:47:32.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:32 vm07 bash[43660]: debug 2026-03-10T11:47:32.626+0000 7f7ab9a42140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:47:33.032 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:32 vm05 bash[53899]: debug 2026-03-10T11:47:32.644+0000 7f279433d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-10T11:47:33.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: debug 2026-03-10T11:47:33.140+0000 7f279433d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: debug 2026-03-10T11:47:33.248+0000 7f279433d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: audit 2026-03-10T11:47:32.036126+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: audit 2026-03-10T11:47:32.036126+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 
vm05 bash[65415]: audit 2026-03-10T11:47:32.037031+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: audit 2026-03-10T11:47:32.037031+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: audit 2026-03-10T11:47:32.037337+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: audit 2026-03-10T11:47:32.037337+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.037757+0000 mon.b (mon.2) 19 : cluster [INF] mon.b calling monitor election 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.037757+0000 mon.b (mon.2) 19 : cluster [INF] mon.b calling monitor election 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.039233+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.039233+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.039552+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.039552+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.042412+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.042412+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046742+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046742+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046794+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046794+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:47:33.342 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046830+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-10T11:47:32.027885+0000 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046830+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-10T11:47:32.027885+0000 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046865+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-10T11:23:05.182054+0000 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046865+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-10T11:23:05.182054+0000 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046903+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046903+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046938+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046938+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046974+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.a 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.046974+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.a 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.047010+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0] mon.c 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.047010+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0] mon.c 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.047045+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.047045+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.047427+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.047427+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.047504+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 
2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.047504+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.048765+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e41: y(active, since 2m), standbys: x 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.048765+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e41: y(active, since 2m), standbys: x 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.049331+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.049331+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: audit 2026-03-10T11:47:32.061836+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24970 ' entity='' 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: audit 2026-03-10T11:47:32.061836+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24970 ' entity='' 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.067123+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-10T11:47:33.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:33 vm05 bash[65415]: cluster 2026-03-10T11:47:32.067123+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: audit 2026-03-10T11:47:32.036126+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: audit 2026-03-10T11:47:32.036126+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: audit 2026-03-10T11:47:32.037031+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: audit 2026-03-10T11:47:32.037031+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: audit 2026-03-10T11:47:32.037337+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: audit 2026-03-10T11:47:32.037337+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:47:33.343 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.037757+0000 mon.b (mon.2) 19 : cluster [INF] mon.b calling monitor election 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.037757+0000 mon.b (mon.2) 19 : cluster [INF] mon.b calling monitor election 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.039233+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.039233+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.039552+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.039552+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.042412+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.042412+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046742+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046742+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046794+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046794+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046830+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-10T11:47:32.027885+0000 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046830+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-10T11:47:32.027885+0000 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046865+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-10T11:23:05.182054+0000 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046865+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-10T11:23:05.182054+0000 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046903+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 
vm05 bash[68966]: cluster 2026-03-10T11:47:32.046903+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046938+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046938+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046974+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.a 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.046974+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.a 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.047010+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0] mon.c 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.047010+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0] mon.c 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.047045+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.047045+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.047427+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.047427+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.047504+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.047504+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.048765+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e41: y(active, since 2m), standbys: x 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.048765+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e41: y(active, since 2m), standbys: x 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.049331+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.049331+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 
bash[68966]: audit 2026-03-10T11:47:32.061836+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24970 ' entity='' 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: audit 2026-03-10T11:47:32.061836+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24970 ' entity='' 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.067123+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-10T11:47:33.343 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:33 vm05 bash[68966]: cluster 2026-03-10T11:47:32.067123+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-10T11:47:33.349 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: debug 2026-03-10T11:47:33.118+0000 7f7ab9a42140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: debug 2026-03-10T11:47:33.210+0000 7f7ab9a42140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: audit 2026-03-10T11:47:32.036126+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: audit 2026-03-10T11:47:32.036126+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: audit 2026-03-10T11:47:32.037031+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: audit 2026-03-10T11:47:32.037031+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: audit 2026-03-10T11:47:32.037337+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: audit 2026-03-10T11:47:32.037337+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.24970 192.168.123.105:0/190352726' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.037757+0000 mon.b (mon.2) 19 : cluster [INF] mon.b calling monitor election 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.037757+0000 mon.b (mon.2) 19 : cluster [INF] mon.b calling monitor election 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.039233+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 
2026-03-10T11:47:32.039233+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.039552+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.039552+0000 mon.a (mon.0) 16 : cluster [INF] mon.a calling monitor election 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.042412+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.042412+0000 mon.a (mon.0) 17 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046742+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046742+0000 mon.a (mon.0) 18 : cluster [DBG] monmap epoch 4 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046794+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046794+0000 mon.a (mon.0) 19 : cluster [DBG] fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046830+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-10T11:47:32.027885+0000 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046830+0000 mon.a (mon.0) 20 : cluster [DBG] last_changed 2026-03-10T11:47:32.027885+0000 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046865+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-10T11:23:05.182054+0000 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046865+0000 mon.a (mon.0) 21 : cluster [DBG] created 2026-03-10T11:23:05.182054+0000 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046903+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046903+0000 mon.a (mon.0) 22 : cluster [DBG] min_mon_release 19 (squid) 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046938+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046938+0000 mon.a (mon.0) 23 : cluster [DBG] election_strategy: 1 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046974+0000 mon.a (mon.0) 24 : cluster [DBG] 0: 
[v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.a 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.046974+0000 mon.a (mon.0) 24 : cluster [DBG] 0: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.a 2026-03-10T11:47:33.350 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.047010+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0] mon.c 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.047010+0000 mon.a (mon.0) 25 : cluster [DBG] 1: [v2:192.168.123.105:3301/0,v1:192.168.123.105:6790/0] mon.c 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.047045+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.047045+0000 mon.a (mon.0) 26 : cluster [DBG] 2: [v2:192.168.123.107:3300/0,v1:192.168.123.107:6789/0] mon.b 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.047427+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.047427+0000 mon.a (mon.0) 27 : cluster [DBG] fsmap 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.047504+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.047504+0000 mon.a (mon.0) 28 : cluster [DBG] osdmap e98: 8 total, 8 up, 8 in 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.048765+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e41: y(active, since 2m), standbys: x 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.048765+0000 mon.a (mon.0) 29 : cluster [DBG] mgrmap e41: y(active, since 2m), standbys: x 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.049331+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.049331+0000 mon.a (mon.0) 30 : cluster [INF] overall HEALTH_OK 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: audit 2026-03-10T11:47:32.061836+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24970 ' entity='' 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: audit 2026-03-10T11:47:32.061836+0000 mon.a (mon.0) 31 : audit [INF] from='mgr.24970 ' entity='' 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.067123+0000 mon.a (mon.0) 32 : cluster [DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-10T11:47:33.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:33 vm07 bash[46158]: cluster 2026-03-10T11:47:32.067123+0000 mon.a (mon.0) 32 : cluster 
[DBG] mgrmap e42: y(active, since 2m), standbys: x 2026-03-10T11:47:33.625 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T11:47:33.625 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-10T11:47:33.625 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: from numpy import show_config as show_numpy_config 2026-03-10T11:47:33.625 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: debug 2026-03-10T11:47:33.354+0000 7f7ab9a42140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:47:33.625 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: debug 2026-03-10T11:47:33.498+0000 7f7ab9a42140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:47:33.625 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: debug 2026-03-10T11:47:33.542+0000 7f7ab9a42140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:47:33.625 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: debug 2026-03-10T11:47:33.578+0000 7f7ab9a42140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:47:33.659 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-10T11:47:33.659 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-10T11:47:33.659 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: from numpy import show_config as show_numpy_config 2026-03-10T11:47:33.659 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: debug 2026-03-10T11:47:33.388+0000 7f279433d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-10T11:47:33.659 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: debug 2026-03-10T11:47:33.536+0000 7f279433d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-10T11:47:33.659 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: debug 2026-03-10T11:47:33.576+0000 7f279433d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-10T11:47:33.660 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: debug 2026-03-10T11:47:33.612+0000 7f279433d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-10T11:47:33.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: debug 2026-03-10T11:47:33.622+0000 7f7ab9a42140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:47:33.945 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:33 vm07 bash[43660]: debug 2026-03-10T11:47:33.674+0000 7f7ab9a42140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:47:34.091 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: debug 2026-03-10T11:47:33.660+0000 7f279433d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-10T11:47:34.091 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:33 vm05 bash[53899]: debug 2026-03-10T11:47:33.712+0000 7f279433d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-10T11:47:34.401 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.118+0000 7f7ab9a42140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:47:34.401 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.158+0000 7f7ab9a42140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:47:34.401 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.194+0000 7f7ab9a42140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:47:34.401 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.354+0000 7f7ab9a42140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:47:34.431 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.164+0000 7f279433d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-10T11:47:34.431 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.200+0000 7f279433d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-10T11:47:34.431 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.240+0000 7f279433d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-10T11:47:34.431 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.388+0000 7f279433d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-10T11:47:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.398+0000 
7f7ab9a42140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:47:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.438+0000 7f7ab9a42140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:47:34.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.546+0000 7f7ab9a42140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:47:34.734 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.428+0000 7f279433d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-10T11:47:34.734 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.468+0000 7f279433d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-10T11:47:34.734 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.576+0000 7f279433d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:47:35.091 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.732+0000 7f279433d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:47:35.091 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.904+0000 7f279433d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:47:35.091 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.940+0000 7f279433d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:47:35.091 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:34 vm05 bash[53899]: debug 2026-03-10T11:47:34.980+0000 7f279433d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:47:35.112 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.702+0000 7f7ab9a42140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-10T11:47:35.112 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.874+0000 7f7ab9a42140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-10T11:47:35.112 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.906+0000 7f7ab9a42140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-10T11:47:35.113 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:34 vm07 bash[43660]: debug 2026-03-10T11:47:34.950+0000 7f7ab9a42140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-10T11:47:35.366 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:35 vm07 bash[43660]: debug 2026-03-10T11:47:35.110+0000 7f7ab9a42140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:47:35.366 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:35 vm07 bash[43660]: debug 2026-03-10T11:47:35.358+0000 7f7ab9a42140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-10T11:47:35.416 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:35 vm05 bash[53899]: debug 2026-03-10T11:47:35.140+0000 7f279433d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-10T11:47:35.416 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:35 vm05 bash[53899]: debug 2026-03-10T11:47:35.388+0000 7f279433d140 -1 mgr[py] Module snap_schedule has missing 
NOTIFY_TYPES member 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:35 vm07 bash[43660]: [10/Mar/2026:11:47:35] ENGINE Bus STARTING 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:35 vm07 bash[43660]: CherryPy Checker: 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:35 vm07 bash[43660]: The Application mounted at '' has an empty config. 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:35 vm07 bash[43660]: [10/Mar/2026:11:47:35] ENGINE Serving on http://:::9283 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:47:35 vm07 bash[43660]: [10/Mar/2026:11:47:35] ENGINE Bus STARTED 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.367670+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.367670+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.367778+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.367778+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: audit 2026-03-10T11:47:35.369261+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: audit 2026-03-10T11:47:35.369261+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: audit 2026-03-10T11:47:35.370067+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: audit 2026-03-10T11:47:35.370067+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-10T11:47:35.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: audit 2026-03-10T11:47:35.372071+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: audit 2026-03-10T11:47:35.372071+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: audit 2026-03-10T11:47:35.372622+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.? 
192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: audit 2026-03-10T11:47:35.372622+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.394592+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.394592+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.394954+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.394954+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.403395+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.403395+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.403634+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e43: y(active, starting, since 0.00878348s), standbys: x 2026-03-10T11:47:35.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:35 vm07 bash[46158]: cluster 2026-03-10T11:47:35.403634+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e43: y(active, starting, since 0.00878348s), standbys: x 2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: cluster 2026-03-10T11:47:35.367670+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: cluster 2026-03-10T11:47:35.367670+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted 2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: cluster 2026-03-10T11:47:35.367778+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: cluster 2026-03-10T11:47:35.367778+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started 2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: audit 2026-03-10T11:47:35.369261+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: audit 2026-03-10T11:47:35.369261+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.? 
192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: audit 2026-03-10T11:47:35.370067+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: audit 2026-03-10T11:47:35.372071+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: audit 2026-03-10T11:47:35.372622+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: cluster 2026-03-10T11:47:35.394592+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted
2026-03-10T11:47:35.787 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: cluster 2026-03-10T11:47:35.394954+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: cluster 2026-03-10T11:47:35.403395+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:35 vm05 bash[65415]: cluster 2026-03-10T11:47:35.403634+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e43: y(active, starting, since 0.00878348s), standbys: x
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: cluster 2026-03-10T11:47:35.367670+0000 mon.a (mon.0) 33 : cluster [DBG] Standby manager daemon x restarted
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: cluster 2026-03-10T11:47:35.367778+0000 mon.a (mon.0) 34 : cluster [DBG] Standby manager daemon x started
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: audit 2026-03-10T11:47:35.369261+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: audit 2026-03-10T11:47:35.370067+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: audit 2026-03-10T11:47:35.372071+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: audit 2026-03-10T11:47:35.372622+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.? 192.168.123.107:0/1405838542' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: cluster 2026-03-10T11:47:35.394592+0000 mon.a (mon.0) 35 : cluster [INF] Active manager daemon y restarted
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: cluster 2026-03-10T11:47:35.394954+0000 mon.a (mon.0) 36 : cluster [INF] Activating manager daemon y
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: cluster 2026-03-10T11:47:35.403395+0000 mon.a (mon.0) 37 : cluster [DBG] osdmap e99: 8 total, 8 up, 8 in
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:35 vm05 bash[68966]: cluster 2026-03-10T11:47:35.403634+0000 mon.a (mon.0) 38 : cluster [DBG] mgrmap e43: y(active, starting, since 0.00878348s), standbys: x
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:35 vm05 bash[53899]: [10/Mar/2026:11:47:35] ENGINE Bus STARTING
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:35 vm05 bash[53899]: CherryPy Checker:
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:35 vm05 bash[53899]: The Application mounted at '' has an empty config.
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:35 vm05 bash[53899]: [10/Mar/2026:11:47:35] ENGINE Serving on http://:::9283
2026-03-10T11:47:35.788 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:35 vm05 bash[53899]: [10/Mar/2026:11:47:35] ENGINE Bus STARTED
2026-03-10T11:47:36.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.416177+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:47:36.695 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.416261+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.416312+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.427328+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.427722+0000 mon.c (mon.1) 7 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.428164+0000 mon.c (mon.1) 8 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.428562+0000 mon.c (mon.1) 9 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.429612+0000 mon.c (mon.1) 10 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.430009+0000 mon.c (mon.1) 11 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.431030+0000 mon.c (mon.1) 12 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.431951+0000 mon.c (mon.1) 13 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.432607+0000 mon.c (mon.1) 14 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.433334+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.434102+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.434757+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.435541+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: cluster 2026-03-10T11:47:35.443256+0000 mon.a (mon.0) 39 : cluster [INF] Manager daemon y is now available
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.472362+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.474211+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.481228+0000 mon.c (mon.1) 21 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.481960+0000 mon.a (mon.0) 40 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.542940+0000 mon.c (mon.1) 22 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:47:36.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:36 vm07 bash[46158]: audit 2026-03-10T11:47:35.543833+0000 mon.a (mon.0) 41 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.416177+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.416261+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.416312+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.427328+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.427722+0000 mon.c (mon.1) 7 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.428164+0000 mon.c (mon.1) 8 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.428562+0000 mon.c (mon.1) 9 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.429612+0000 mon.c (mon.1) 10 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.430009+0000 mon.c (mon.1) 11 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.431030+0000 mon.c (mon.1) 12 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-10T11:47:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.431951+0000 mon.c (mon.1) 13 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.432607+0000 mon.c (mon.1) 14 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.433334+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.434102+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.434757+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.435541+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: cluster 2026-03-10T11:47:35.443256+0000 mon.a (mon.0) 39 : cluster [INF] Manager daemon y is now available
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.472362+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.474211+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.481228+0000 mon.c (mon.1) 21 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.481960+0000 mon.a (mon.0) 40 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.481960+0000 mon.a (mon.0) 40 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.542940+0000 mon.c (mon.1) 22 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.542940+0000 mon.c (mon.1) 22 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.543833+0000 mon.a (mon.0) 41 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:36 vm05 bash[65415]: audit 2026-03-10T11:47:35.543833+0000 mon.a (mon.0) 41 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.416177+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.416177+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.416261+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.416261+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.416312+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.416312+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.427328+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr 
metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.427328+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.427722+0000 mon.c (mon.1) 7 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.427722+0000 mon.c (mon.1) 7 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.428164+0000 mon.c (mon.1) 8 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.428164+0000 mon.c (mon.1) 8 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.428562+0000 mon.c (mon.1) 9 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.428562+0000 mon.c (mon.1) 9 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.429612+0000 mon.c (mon.1) 10 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.429612+0000 mon.c (mon.1) 10 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.430009+0000 mon.c (mon.1) 11 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.430009+0000 mon.c (mon.1) 11 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.431030+0000 mon.c (mon.1) 12 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.431030+0000 
mon.c (mon.1) 12 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.431951+0000 mon.c (mon.1) 13 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.431951+0000 mon.c (mon.1) 13 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.432607+0000 mon.c (mon.1) 14 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.432607+0000 mon.c (mon.1) 14 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.433334+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:47:36.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.433334+0000 mon.c (mon.1) 15 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.434102+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.434102+0000 mon.c (mon.1) 16 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.434757+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.434757+0000 mon.c (mon.1) 17 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.435541+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.435541+0000 mon.c (mon.1) 18 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: cluster 
2026-03-10T11:47:35.443256+0000 mon.a (mon.0) 39 : cluster [INF] Manager daemon y is now available 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: cluster 2026-03-10T11:47:35.443256+0000 mon.a (mon.0) 39 : cluster [INF] Manager daemon y is now available 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.472362+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.472362+0000 mon.c (mon.1) 19 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.474211+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.474211+0000 mon.c (mon.1) 20 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.481228+0000 mon.c (mon.1) 21 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.481228+0000 mon.c (mon.1) 21 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.481960+0000 mon.a (mon.0) 40 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.481960+0000 mon.a (mon.0) 40 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.542940+0000 mon.c (mon.1) 22 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.542940+0000 mon.c (mon.1) 22 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.543833+0000 mon.a (mon.0) 41 : audit [INF] from='mgr.44107 ' 
entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:36 vm05 bash[68966]: audit 2026-03-10T11:47:35.543833+0000 mon.a (mon.0) 41 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-10T11:47:36.843 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:36 vm05 bash[53899]: debug 2026-03-10T11:47:36.456+0000 7f27606a9640 -1 mgr.server handle_report got status from non-daemon mon.a 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cluster 2026-03-10T11:47:36.445848+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e44: y(active, since 1.05098s), standbys: x 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cluster 2026-03-10T11:47:36.445848+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e44: y(active, since 1.05098s), standbys: x 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.634961+0000 mgr.y (mgr.44107) 2 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Bus STARTING 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.634961+0000 mgr.y (mgr.44107) 2 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Bus STARTING 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.744504+0000 mgr.y (mgr.44107) 3 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.744504+0000 mgr.y (mgr.44107) 3 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Serving on https://192.168.123.105:7150 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.744981+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Client ('192.168.123.105', 35764) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.744981+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Client ('192.168.123.105', 35764) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.846164+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.846164+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Serving on http://192.168.123.105:8765 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.846199+0000 mgr.y (mgr.44107) 6 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Bus STARTED 2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:37 vm05 bash[65415]: cephadm 2026-03-10T11:47:36.846199+0000 mgr.y 
2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:37 vm05 bash[68966]: cluster 2026-03-10T11:47:36.445848+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e44: y(active, since 1.05098s), standbys: x
2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:37 vm05 bash[68966]: cephadm 2026-03-10T11:47:36.634961+0000 mgr.y (mgr.44107) 2 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Bus STARTING
2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:37 vm05 bash[68966]: cephadm 2026-03-10T11:47:36.744504+0000 mgr.y (mgr.44107) 3 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Serving on https://192.168.123.105:7150
2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:37 vm05 bash[68966]: cephadm 2026-03-10T11:47:36.744981+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Client ('192.168.123.105', 35764) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:37 vm05 bash[68966]: cephadm 2026-03-10T11:47:36.846164+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Serving on http://192.168.123.105:8765
2026-03-10T11:47:37.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:37 vm05 bash[68966]: cephadm 2026-03-10T11:47:36.846199+0000 mgr.y (mgr.44107) 6 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Bus STARTED
2026-03-10T11:47:37.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:37 vm07 bash[46158]: cluster 2026-03-10T11:47:36.445848+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e44: y(active, since 1.05098s), standbys: x
2026-03-10T11:47:37.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:37 vm07 bash[46158]: cephadm 2026-03-10T11:47:36.634961+0000 mgr.y (mgr.44107) 2 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Bus STARTING
2026-03-10T11:47:37.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:37 vm07 bash[46158]: cephadm 2026-03-10T11:47:36.744504+0000 mgr.y (mgr.44107) 3 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Serving on https://192.168.123.105:7150
2026-03-10T11:47:37.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:37 vm07 bash[46158]: cephadm 2026-03-10T11:47:36.744981+0000 mgr.y (mgr.44107) 4 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Client ('192.168.123.105', 35764) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-10T11:47:37.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:37 vm07 bash[46158]: cephadm 2026-03-10T11:47:36.846164+0000 mgr.y (mgr.44107) 5 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Serving on http://192.168.123.105:8765
2026-03-10T11:47:37.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:37 vm07 bash[46158]: cephadm 2026-03-10T11:47:36.846199+0000 mgr.y (mgr.44107) 6 : cephadm [INF] [10/Mar/2026:11:47:36] ENGINE Bus STARTED
2026-03-10T11:47:38.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:38 vm05 bash[65415]: cluster 2026-03-10T11:47:37.425081+0000 mgr.y (mgr.44107) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:47:38.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:38 vm05 bash[65415]: cluster 2026-03-10T11:47:37.463076+0000 mon.a (mon.0) 43 : cluster [DBG] mgrmap e45: y(active, since 2s), standbys: x
2026-03-10T11:47:38.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:38 vm05 bash[68966]: cluster 2026-03-10T11:47:37.425081+0000 mgr.y (mgr.44107) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:47:38.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:38 vm05 bash[68966]: cluster 2026-03-10T11:47:37.463076+0000 mon.a (mon.0) 43 : cluster [DBG] mgrmap e45: y(active, since 2s), standbys: x
2026-03-10T11:47:38.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:38 vm07 bash[46158]: cluster 2026-03-10T11:47:37.425081+0000 mgr.y (mgr.44107) 7 : cluster [DBG] pgmap v4: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:47:38.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:38 vm07 bash[46158]: cluster 2026-03-10T11:47:37.463076+0000 mon.a (mon.0) 43 : cluster [DBG] mgrmap e45: y(active, since 2s), standbys: x
2026-03-10T11:47:39.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:47:38] "GET /metrics HTTP/1.1" 200 34783 "" "Prometheus/2.51.0"
2026-03-10T11:47:39.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:39 vm05 bash[65415]: audit 2026-03-10T11:47:38.999804+0000 mgr.y (mgr.44107) 8 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:47:39.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:39 vm05 bash[68966]: audit 2026-03-10T11:47:38.999804+0000 mgr.y (mgr.44107) 8 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:47:39.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:39 vm07 bash[46158]: audit 2026-03-10T11:47:38.999804+0000 mgr.y (mgr.44107) 8 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:47:39.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:39 vm07 bash[46158]: audit 2026-03-10T11:47:38.999804+0000 mgr.y (mgr.44107) 8 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:47:40.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:40 vm05 bash[65415]: cluster 2026-03-10T11:47:39.425343+0000 mgr.y (mgr.44107) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:47:40.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:40 vm05 bash[65415]: cluster 2026-03-10T11:47:39.469947+0000 mon.a (mon.0) 44 : cluster [DBG] mgrmap e46: y(active, since 4s), standbys: x
2026-03-10T11:47:40.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:40 vm05 bash[68966]: cluster 2026-03-10T11:47:39.425343+0000 mgr.y (mgr.44107) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:47:40.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:40 vm05 bash[68966]: cluster 2026-03-10T11:47:39.469947+0000 mon.a (mon.0) 44 : cluster [DBG] mgrmap e46: y(active, since 4s), standbys: x
2026-03-10T11:47:40.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:40 vm07 bash[46158]: cluster 2026-03-10T11:47:39.425343+0000 mgr.y (mgr.44107) 9 : cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:47:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:40 vm07 bash[46158]: cluster 2026-03-10T11:47:39.469947+0000 mon.a (mon.0) 44 : cluster [DBG] mgrmap e46: y(active, since 4s), standbys: x
2026-03-10T11:47:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:42 vm05 bash[65415]: cluster 2026-03-10T11:47:41.425609+0000 mgr.y (mgr.44107) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:47:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:42 vm05 bash[65415]: audit 2026-03-10T11:47:41.572882+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:42 vm05 bash[65415]: audit 2026-03-10T11:47:41.580151+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:42 vm05 bash[65415]: audit 2026-03-10T11:47:41.613734+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:42 vm05 bash[65415]: audit 2026-03-10T11:47:41.624774+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:42 vm05 bash[65415]: audit 2026-03-10T11:47:42.189653+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:42 vm05 bash[65415]: audit 2026-03-10T11:47:42.195567+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:42 vm05 bash[65415]: audit 2026-03-10T11:47:42.219918+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:42 vm05 bash[65415]: audit 2026-03-10T11:47:42.225917+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44107 ' entity='mgr.y'
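The pgmap lines above are the cluster-log heartbeat that health checks typically key on. A minimal Python sketch (not part of teuthology; the line format is assumed from the samples in this log) that asserts every PG counted by such a line is active+clean:

    import re

    # Matches e.g. "pgmap v5: 161 pgs: 161 active+clean" (format assumed
    # from the journal lines in this log).
    PGMAP_RE = re.compile(r"pgmap v\d+: (\d+) pgs: (\d+) active\+clean")

    def all_active_clean(line: str) -> bool:
        """True when every PG counted by this pgmap line is active+clean."""
        m = PGMAP_RE.search(line)
        return bool(m) and m.group(1) == m.group(2)

    sample = ("cluster [DBG] pgmap v5: 161 pgs: 161 active+clean; "
              "457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail")
    assert all_active_clean(sample)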
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:42 vm05 bash[68966]: cluster 2026-03-10T11:47:41.425609+0000 mgr.y (mgr.44107) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:42 vm05 bash[68966]: audit 2026-03-10T11:47:41.572882+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:42 vm05 bash[68966]: audit 2026-03-10T11:47:41.580151+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:42 vm05 bash[68966]: audit 2026-03-10T11:47:41.613734+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:42 vm05 bash[68966]: audit 2026-03-10T11:47:41.624774+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:42 vm05 bash[68966]: audit 2026-03-10T11:47:42.189653+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:42 vm05 bash[68966]: audit 2026-03-10T11:47:42.195567+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:42 vm05 bash[68966]: audit 2026-03-10T11:47:42.219918+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:42 vm05 bash[68966]: audit 2026-03-10T11:47:42.225917+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:42 vm07 bash[46158]: cluster 2026-03-10T11:47:41.425609+0000 mgr.y (mgr.44107) 10 : cluster [DBG] pgmap v6: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:47:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:42 vm07 bash[46158]: audit 2026-03-10T11:47:41.572882+0000 mon.a (mon.0) 45 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:42 vm07 bash[46158]: audit 2026-03-10T11:47:41.580151+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:42 vm07 bash[46158]: audit 2026-03-10T11:47:41.613734+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:42 vm07 bash[46158]: audit 2026-03-10T11:47:41.624774+0000 mon.a (mon.0) 48 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:42 vm07 bash[46158]: audit 2026-03-10T11:47:42.189653+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:42 vm07 bash[46158]: audit 2026-03-10T11:47:42.195567+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:42 vm07 bash[46158]: audit 2026-03-10T11:47:42.219918+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:42 vm07 bash[46158]: audit 2026-03-10T11:47:42.225917+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:44.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:44 vm05 bash[65415]: cluster 2026-03-10T11:47:43.426149+0000 mgr.y (mgr.44107) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T11:47:44.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:44 vm05 bash[68966]: cluster 2026-03-10T11:47:43.426149+0000 mgr.y (mgr.44107) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T11:47:44.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:44 vm07 bash[46158]: cluster 2026-03-10T11:47:43.426149+0000 mgr.y (mgr.44107) 11 : cluster [DBG] pgmap v7: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s
2026-03-10T11:47:45.546 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (14m) 4s ago 21m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (101s) 4s ago 20m 64.6M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (2m) 4s ago 20m 43.7M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (2m) 4s ago 23m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (11m) 4s ago 24m 508M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (15s) 4s ago 24m 33.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (65s) 4s ago 24m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (29s) 4s ago 24m 33.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (14m) 4s ago 21m 7999k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (14m) 4s ago 21m 7816k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (23m) 4s ago 23m 54.3M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (23m) 4s ago 23m 56.5M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (23m) 4s ago 23m 52.0M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (22m) 4s ago 22m 55.9M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (22m) 4s ago 22m 56.0M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (22m) 4s ago 22m 52.5M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (22m) 4s ago 22m 51.5M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (21m) 4s ago 21m 54.1M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (2m) 4s ago 21m 40.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (20m) 4s ago 20m 88.6M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:47:45.952 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (20m) 4s ago 20m 89.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 10,
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:47:46.189 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true,
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) mon on host(s) vm05",
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout: "mon"
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout: ],
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "2/2 daemons upgraded",
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout: "message": "",
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:47:46.394 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:47:46.445 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:47:46 vm07 bash[44829]: logger=infra.usagestats t=2026-03-10T11:47:46.123580021Z level=info msg="Usage stats are ready to report"
2026-03-10T11:47:46.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:46 vm07 bash[46158]: cluster 2026-03-10T11:47:45.426490+0000 mgr.y (mgr.44107) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:47:46.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:46 vm07 bash[46158]: audit 2026-03-10T11:47:45.534518+0000 mgr.y (mgr.44107) 13 : audit [DBG] from='client.44125 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:46.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:46 vm07 bash[46158]: audit 2026-03-10T11:47:45.752185+0000 mgr.y (mgr.44107) 14 : audit [DBG] from='client.54132 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:46.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:46 vm07 bash[46158]: audit 2026-03-10T11:47:45.951559+0000 mgr.y (mgr.44107) 15 : audit [DBG] from='client.34190 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
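The `ceph versions` output above captures the staggered state mid-upgrade: mons and mgrs are already on 19.2.3 (squid) while the OSDs and RGWs still report 17.2.0 (quincy). A minimal Python sketch (not part of the test; it assumes `ceph` CLI access with the admin keyring, and that `ceph versions` prints this JSON, as it does here) that checks for that split:

    import json
    import subprocess

    # `ceph versions` prints the JSON mapping shown above.
    out = subprocess.run(["ceph", "versions"],
                         capture_output=True, text=True, check=True)
    versions = json.loads(out.stdout)

    def release_counts(daemon_type: str) -> dict:
        """Collapse 'ceph version X (sha) codename (stable)' keys to codenames."""
        counts: dict = {}
        for banner, n in versions.get(daemon_type, {}).items():
            codename = banner.split()[-2]  # "... squid (stable)" -> "squid"
            counts[codename] = counts.get(codename, 0) + n
        return counts

    # Mid-staggered-upgrade expectation taken from the output above.
    assert release_counts("mon") == {"squid": 3}
    assert release_counts("mgr") == {"squid": 2}
    assert release_counts("osd") == {"quincy": 8}
    assert release_counts("rgw") == {"quincy": 2}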
2026-03-10T11:47:46.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:46 vm07 bash[46158]: audit 2026-03-10T11:47:46.190662+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.105:0/3109148547' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:47:47.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:46 vm05 bash[68966]: cluster 2026-03-10T11:47:45.426490+0000 mgr.y (mgr.44107) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:47:47.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:46 vm05 bash[68966]: audit 2026-03-10T11:47:45.534518+0000 mgr.y (mgr.44107) 13 : audit [DBG] from='client.44125 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:47.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:46 vm05 bash[68966]: audit 2026-03-10T11:47:45.752185+0000 mgr.y (mgr.44107) 14 : audit [DBG] from='client.54132 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:47.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:46 vm05 bash[68966]: audit 2026-03-10T11:47:45.951559+0000 mgr.y (mgr.44107) 15 : audit [DBG] from='client.34190 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:47.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:46 vm05 bash[68966]: audit 2026-03-10T11:47:46.190662+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.105:0/3109148547' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:47:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:46 vm05 bash[65415]: cluster 2026-03-10T11:47:45.426490+0000 mgr.y (mgr.44107) 12 : cluster [DBG] pgmap v8: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
2026-03-10T11:47:47.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:46 vm05 bash[65415]: audit 2026-03-10T11:47:45.534518+0000 mgr.y (mgr.44107) 13 : audit [DBG] from='client.44125 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:47.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:46 vm05 bash[65415]: audit 2026-03-10T11:47:45.752185+0000 mgr.y (mgr.44107) 14 : audit [DBG] from='client.54132 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:47.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:46 vm05 bash[65415]: audit 2026-03-10T11:47:45.951559+0000 mgr.y (mgr.44107) 15 : audit [DBG] from='client.34190 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:47.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:46 vm05 bash[65415]: audit 2026-03-10T11:47:46.190662+0000 mon.b (mon.2) 24 : audit [DBG] from='client.? 192.168.123.105:0/3109148547' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
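The audit entries above record `orch upgrade status` being dispatched several times in quick succession (clients 44125, 54132, and shortly after 54150), i.e. the harness polls the upgrade until the current step finishes. A sketch of such a poll loop (a hypothetical helper, not the test's own code; it assumes the JSON fields shown earlier: `in_progress`, `services_complete`, `progress`):

    import json
    import subprocess
    import time

    def wait_for_service(daemon_type: str, timeout: float = 600.0,
                         interval: float = 5.0) -> dict:
        """Poll `ceph orch upgrade status` until daemon_type is in services_complete."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(["ceph", "orch", "upgrade", "status"],
                                 capture_output=True, text=True, check=True)
            status = json.loads(out.stdout)
            # Fields as in the JSON above: in_progress, services_complete, progress.
            if daemon_type in status.get("services_complete", []):
                return status
            time.sleep(interval)
        raise TimeoutError(f"{daemon_type} not upgraded within {timeout}s")

    # e.g. wait_for_service("mon") returns once output like the above appears.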
2026-03-10T11:47:49.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:48 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:47:48] "GET /metrics HTTP/1.1" 200 34783 "" "Prometheus/2.51.0"
2026-03-10T11:47:49.751 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:49 vm05 bash[65415]: audit 2026-03-10T11:47:46.397467+0000 mgr.y (mgr.44107) 16 : audit [DBG] from='client.54150 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:49.751 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:49 vm05 bash[68966]: audit 2026-03-10T11:47:46.397467+0000 mgr.y (mgr.44107) 16 : audit [DBG] from='client.54150 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:49.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:49 vm07 bash[46158]: audit 2026-03-10T11:47:46.397467+0000 mgr.y (mgr.44107) 16 : audit [DBG] from='client.54150 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:47:50.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: cluster 2026-03-10T11:47:47.426967+0000 mgr.y (mgr.44107) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:47:50.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:49.007942+0000 mgr.y (mgr.44107) 18 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:47:50.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: cluster 2026-03-10T11:47:49.427311+0000 mgr.y (mgr.44107) 19 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T11:47:50.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.424691+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.434869+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.439381+0000 mon.c (mon.1) 23 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.638 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.441611+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.639 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.443964+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.639 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.450240+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.639 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.452811+0000 mon.c (mon.1) 24 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.639 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.453013+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.639 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.454176+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:50.639 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.454987+0000 mon.c (mon.1) 26 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:47:50.639 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:50 vm07 bash[46158]: audit 2026-03-10T11:47:50.473195+0000 mon.c (mon.1) 27 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: cluster 2026-03-10T11:47:47.426967+0000 mgr.y (mgr.44107) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:49.007942+0000 mgr.y (mgr.44107) 18 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: cluster 2026-03-10T11:47:49.427311+0000 mgr.y (mgr.44107) 19 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.424691+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.434869+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.439381+0000 mon.c (mon.1) 23 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.441611+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.443964+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.450240+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.452811+0000 mon.c (mon.1) 24 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.453013+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.454176+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.454987+0000 mon.c (mon.1) 26 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:50 vm05 bash[65415]: audit 2026-03-10T11:47:50.473195+0000 mon.c (mon.1) 27 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: cluster 2026-03-10T11:47:47.426967+0000 mgr.y (mgr.44107) 17 : cluster [DBG] pgmap v9: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:49.007942+0000 mgr.y (mgr.44107) 18 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: cluster 2026-03-10T11:47:49.427311+0000 mgr.y (mgr.44107) 19 : cluster [DBG] pgmap v10: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T11:47:50.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.424691+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.434869+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.439381+0000 mon.c (mon.1) 23 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.441611+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.443964+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm05", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.450240+0000 mon.a (mon.0) 57 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.452811+0000 mon.c (mon.1) 24 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.453013+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm07", "name": "osd_memory_target"}]: dispatch
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.454176+0000 mon.c (mon.1) 25 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.454987+0000 mon.c (mon.1) 26 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:47:50.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:50 vm05 bash[68966]: audit 2026-03-10T11:47:50.473195+0000 mon.c (mon.1) 27 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.455926+0000 mgr.y (mgr.44107) 20 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.456037+0000 mgr.y (mgr.44107) 21 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.501732+0000 mgr.y (mgr.44107) 22 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.510793+0000 mgr.y (mgr.44107) 23 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.534945+0000 mgr.y (mgr.44107) 24 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.545938+0000 mgr.y (mgr.44107) 25 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.591908+0000 mgr.y (mgr.44107) 26 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.600472+0000 mgr.y (mgr.44107) 27 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.client.admin.keyring
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:50.637053+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:50.646459+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:50.653556+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:50.660181+0000 mon.a (mon.0) 62 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:50.666110+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.677340+0000 mgr.y (mgr.44107) 28 : cephadm [INF] Reconfiguring osd.3 (monmap changed)...
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:50.677730+0000 mon.c (mon.1) 28 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:50.678643+0000 mon.c (mon.1) 29 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:50.680192+0000 mgr.y (mgr.44107) 29 : cephadm [INF] Reconfiguring daemon osd.3 on vm05
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.069969+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.078821+0000 mon.a (mon.0) 65 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:51.082886+0000 mgr.y (mgr.44107) 30 : cephadm [INF] Reconfiguring osd.2 (monmap changed)...
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.083163+0000 mon.c (mon.1) 30 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
2026-03-10T11:47:51.642 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.084201+0000 mon.c (mon.1) 31 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:51.643 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: cephadm 2026-03-10T11:47:51.085689+0000 mgr.y (mgr.44107) 31 : cephadm [INF] Reconfiguring daemon osd.2 on vm05
2026-03-10T11:47:51.643 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.462711+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:51.643 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.469176+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:51.643 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.473645+0000 mon.c (mon.1) 32 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:47:51.643 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.474463+0000 mon.c (mon.1) 33 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:47:51.643
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.475340+0000 mon.c (mon.1) 34 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:51.643 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:51 vm05 bash[68966]: audit 2026-03-10T11:47:51.475340+0000 mon.c (mon.1) 34 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.455926+0000 mgr.y (mgr.44107) 20 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.455926+0000 mgr.y (mgr.44107) 20 : cephadm [INF] Updating vm05:/etc/ceph/ceph.conf 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.456037+0000 mgr.y (mgr.44107) 21 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.456037+0000 mgr.y (mgr.44107) 21 : cephadm [INF] Updating vm07:/etc/ceph/ceph.conf 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.501732+0000 mgr.y (mgr.44107) 22 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.501732+0000 mgr.y (mgr.44107) 22 : cephadm [INF] Updating vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.510793+0000 mgr.y (mgr.44107) 23 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.510793+0000 mgr.y (mgr.44107) 23 : cephadm [INF] Updating vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.534945+0000 mgr.y (mgr.44107) 24 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.534945+0000 mgr.y (mgr.44107) 24 : cephadm [INF] Updating vm05:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.545938+0000 mgr.y (mgr.44107) 25 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.545938+0000 mgr.y (mgr.44107) 25 : cephadm [INF] Updating vm07:/etc/ceph/ceph.client.admin.keyring 2026-03-10T11:47:51.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:51 vm07 bash[46158]: cephadm 2026-03-10T11:47:50.591908+0000 mgr.y (mgr.44107) 26 : cephadm [INF] Updating 
[2026-03-10T11:47:51.946 journalctl@ceph.mon.b.vm07.stdout (vm07 bash[46158]): the same cephadm/audit records, relayed a second time through mon.b's journal]
[2026-03-10T11:47:52.091 journalctl@ceph.mon.c.vm05.stdout (vm05 bash[65415]): the same cephadm/audit records, relayed a third time through mon.c's journal]
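The block above is cephadm's config-distribution pass: for each target the mgr dispatches config generate-minimal-conf on a monitor, then copies the result and the admin keyring to /etc/ceph and to /var/lib/ceph/<fsid>/config on every managed host. A minimal sketch of inspecting the same data by hand, assuming a host with an admin keyring (the fsid is taken from this log, not a general value):

    # The stripped-down conf the mgr requests before each push
    ceph config generate-minimal-conf
    # What actually landed on a managed host for this cluster fsid
    cat /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/config/ceph.conf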
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: cluster 2026-03-10T11:47:51.427622+0000 mgr.y (mgr.44107) 32 : cluster [DBG] pgmap v11: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: cephadm 2026-03-10T11:47:51.473398+0000 mgr.y (mgr.44107) 33 : cephadm [INF] Reconfiguring mon.c (monmap changed)...
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: cephadm 2026-03-10T11:47:51.476155+0000 mgr.y (mgr.44107) 34 : cephadm [INF] Reconfiguring daemon mon.c on vm05
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:51.861684+0000 mon.a (mon.0) 68 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:51.871177+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: cephadm 2026-03-10T11:47:51.875888+0000 mgr.y (mgr.44107) 35 : cephadm [INF] Reconfiguring osd.0 (monmap changed)...
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:51.876514+0000 mon.c (mon.1) 35 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:51.894191+0000 mon.c (mon.1) 36 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: cephadm 2026-03-10T11:47:51.896563+0000 mgr.y (mgr.44107) 36 : cephadm [INF] Reconfiguring daemon osd.0 on vm05
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.297259+0000 mon.a (mon.0) 70 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.306594+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.311158+0000 mon.c (mon.1) 37 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.312078+0000 mon.c (mon.1) 38 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.312846+0000 mon.c (mon.1) 39 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.678028+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.687393+0000 mon.a (mon.0) 73 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.691555+0000 mon.c (mon.1) 40 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.691805+0000 mon.a (mon.0) 74 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:47:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:52 vm07 bash[46158]: audit 2026-03-10T11:47:52.693632+0000 mon.c (mon.1) 41 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
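The mon.c 40 / mon.a 74 audit entries record the exact capabilities cephadm requests for the managed RGW daemon. The dispatched JSON corresponds to this CLI invocation (entity name copied verbatim from the log; shown only to make the caps readable):

    ceph auth get-or-create client.rgw.foo.vm05.fdjkgz \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'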
[2026-03-10T11:47:53.341 journalctl@ceph.mon.a.vm05.stdout (vm05 bash[68966]): the same cluster/cephadm/audit records, relayed a second time through mon.a's journal]
[2026-03-10T11:47:53.342 journalctl@ceph.mon.c.vm05.stdout (vm05 bash[65415]): the same cluster/cephadm/audit records, relayed a third time through mon.c's journal]
2026-03-10T11:47:52.310836+0000 mgr.y (mgr.44107) 37 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:52.310836+0000 mgr.y (mgr.44107) 37 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:52.313691+0000 mgr.y (mgr.44107) 38 : cephadm [INF] Reconfiguring daemon mon.a on vm05 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:52.313691+0000 mgr.y (mgr.44107) 38 : cephadm [INF] Reconfiguring daemon mon.a on vm05 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:52.691195+0000 mgr.y (mgr.44107) 39 : cephadm [INF] Reconfiguring rgw.foo.vm05.fdjkgz (monmap changed)... 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:52.691195+0000 mgr.y (mgr.44107) 39 : cephadm [INF] Reconfiguring rgw.foo.vm05.fdjkgz (monmap changed)... 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:52.694499+0000 mgr.y (mgr.44107) 40 : cephadm [INF] Reconfiguring daemon rgw.foo.vm05.fdjkgz on vm05 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:52.694499+0000 mgr.y (mgr.44107) 40 : cephadm [INF] Reconfiguring daemon rgw.foo.vm05.fdjkgz on vm05 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.088456+0000 mon.a (mon.0) 75 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.088456+0000 mon.a (mon.0) 75 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.095041+0000 mon.a (mon.0) 76 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.095041+0000 mon.a (mon.0) 76 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:53.097137+0000 mgr.y (mgr.44107) 41 : cephadm [INF] Reconfiguring osd.1 (monmap changed)... 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:53.097137+0000 mgr.y (mgr.44107) 41 : cephadm [INF] Reconfiguring osd.1 (monmap changed)... 
2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.097367+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.097367+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.097909+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.097909+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:53.099109+0000 mgr.y (mgr.44107) 42 : cephadm [INF] Reconfiguring daemon osd.1 on vm05 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: cephadm 2026-03-10T11:47:53.099109+0000 mgr.y (mgr.44107) 42 : cephadm [INF] Reconfiguring daemon osd.1 on vm05 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.484169+0000 mon.a (mon.0) 77 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.484169+0000 mon.a (mon.0) 77 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.489178+0000 mon.a (mon.0) 78 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.489178+0000 mon.a (mon.0) 78 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.491144+0000 mon.c (mon.1) 44 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.491144+0000 mon.c (mon.1) 44 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.491318+0000 mon.a (mon.0) 79 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.491318+0000 mon.a 
(mon.0) 79 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-10T11:47:54.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.491899+0000 mon.c (mon.1) 45 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.491899+0000 mon.c (mon.1) 45 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.492444+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.492444+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.858339+0000 mon.a (mon.0) 80 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.858339+0000 mon.a (mon.0) 80 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.864918+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.864918+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.867807+0000 mon.c (mon.1) 47 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.867807+0000 mon.c (mon.1) 47 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.868352+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:54 vm05 bash[65415]: audit 2026-03-10T11:47:53.868352+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: cephadm 2026-03-10T11:47:52.310836+0000 mgr.y (mgr.44107) 37 : cephadm [INF] Reconfiguring mon.a (monmap changed)... 
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: cephadm 2026-03-10T11:47:52.313691+0000 mgr.y (mgr.44107) 38 : cephadm [INF] Reconfiguring daemon mon.a on vm05
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: cephadm 2026-03-10T11:47:52.691195+0000 mgr.y (mgr.44107) 39 : cephadm [INF] Reconfiguring rgw.foo.vm05.fdjkgz (monmap changed)...
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: cephadm 2026-03-10T11:47:52.694499+0000 mgr.y (mgr.44107) 40 : cephadm [INF] Reconfiguring daemon rgw.foo.vm05.fdjkgz on vm05
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.088456+0000 mon.a (mon.0) 75 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.095041+0000 mon.a (mon.0) 76 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: cephadm 2026-03-10T11:47:53.097137+0000 mgr.y (mgr.44107) 41 : cephadm [INF] Reconfiguring osd.1 (monmap changed)...
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.097367+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.097909+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: cephadm 2026-03-10T11:47:53.099109+0000 mgr.y (mgr.44107) 42 : cephadm [INF] Reconfiguring daemon osd.1 on vm05
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.484169+0000 mon.a (mon.0) 77 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.489178+0000 mon.a (mon.0) 78 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.491144+0000 mon.c (mon.1) 44 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.491318+0000 mon.a (mon.0) 79 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.491899+0000 mon.c (mon.1) 45 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.492444+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.858339+0000 mon.a (mon.0) 80 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.864918+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.867807+0000 mon.c (mon.1) 47 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T11:47:54.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:54 vm05 bash[68966]: audit 2026-03-10T11:47:53.868352+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: cephadm 2026-03-10T11:47:52.310836+0000 mgr.y (mgr.44107) 37 : cephadm [INF] Reconfiguring mon.a (monmap changed)...
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: cephadm 2026-03-10T11:47:52.313691+0000 mgr.y (mgr.44107) 38 : cephadm [INF] Reconfiguring daemon mon.a on vm05
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: cephadm 2026-03-10T11:47:52.691195+0000 mgr.y (mgr.44107) 39 : cephadm [INF] Reconfiguring rgw.foo.vm05.fdjkgz (monmap changed)...
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: cephadm 2026-03-10T11:47:52.694499+0000 mgr.y (mgr.44107) 40 : cephadm [INF] Reconfiguring daemon rgw.foo.vm05.fdjkgz on vm05
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.088456+0000 mon.a (mon.0) 75 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.095041+0000 mon.a (mon.0) 76 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: cephadm 2026-03-10T11:47:53.097137+0000 mgr.y (mgr.44107) 41 : cephadm [INF] Reconfiguring osd.1 (monmap changed)...
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.097367+0000 mon.c (mon.1) 42 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.097909+0000 mon.c (mon.1) 43 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: cephadm 2026-03-10T11:47:53.099109+0000 mgr.y (mgr.44107) 42 : cephadm [INF] Reconfiguring daemon osd.1 on vm05
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.484169+0000 mon.a (mon.0) 77 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.489178+0000 mon.a (mon.0) 78 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.491144+0000 mon.c (mon.1) 44 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.491318+0000 mon.a (mon.0) 79 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.491899+0000 mon.c (mon.1) 45 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.492444+0000 mon.c (mon.1) 46 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.858339+0000 mon.a (mon.0) 80 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.864918+0000 mon.a (mon.0) 81 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.867807+0000 mon.c (mon.1) 47 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T11:47:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:54 vm07 bash[46158]: audit 2026-03-10T11:47:53.868352+0000 mon.c (mon.1) 48 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: cluster 2026-03-10T11:47:53.428284+0000 mgr.y (mgr.44107) 43 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: cephadm 2026-03-10T11:47:53.490895+0000 mgr.y (mgr.44107) 44 : cephadm [INF] Reconfiguring mgr.y (monmap changed)...
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: cephadm 2026-03-10T11:47:53.492958+0000 mgr.y (mgr.44107) 45 : cephadm [INF] Reconfiguring daemon mgr.y on vm05
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: cephadm 2026-03-10T11:47:53.867502+0000 mgr.y (mgr.44107) 46 : cephadm [INF] Reconfiguring osd.4 (monmap changed)...
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: cephadm 2026-03-10T11:47:53.869533+0000 mgr.y (mgr.44107) 47 : cephadm [INF] Reconfiguring daemon osd.4 on vm07
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.253001+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.260104+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.261492+0000 mon.c (mon.1) 49 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.262430+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.643003+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.649720+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.650671+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.651145+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.651732+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.652180+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.991956+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.998656+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:54.999649+0000 mon.c (mon.1) 54 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T11:47:55.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:55 vm07 bash[46158]: audit 2026-03-10T11:47:55.000307+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: cluster 2026-03-10T11:47:53.428284+0000 mgr.y (mgr.44107) 43 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: cephadm 2026-03-10T11:47:53.490895+0000 mgr.y (mgr.44107) 44 : cephadm [INF] Reconfiguring mgr.y (monmap changed)...
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: cephadm 2026-03-10T11:47:53.492958+0000 mgr.y (mgr.44107) 45 : cephadm [INF] Reconfiguring daemon mgr.y on vm05
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: cephadm 2026-03-10T11:47:53.867502+0000 mgr.y (mgr.44107) 46 : cephadm [INF] Reconfiguring osd.4 (monmap changed)...
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: cephadm 2026-03-10T11:47:53.869533+0000 mgr.y (mgr.44107) 47 : cephadm [INF] Reconfiguring daemon osd.4 on vm07
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.253001+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.260104+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.261492+0000 mon.c (mon.1) 49 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.262430+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.643003+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.649720+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.650671+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:55.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.651145+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.651732+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.652180+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.991956+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.998656+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:54.999649+0000 mon.c (mon.1) 54 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:55 vm05 bash[65415]: audit 2026-03-10T11:47:55.000307+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: cluster 2026-03-10T11:47:53.428284+0000 mgr.y (mgr.44107) 43 : cluster [DBG] pgmap v12: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: cephadm 2026-03-10T11:47:53.490895+0000 mgr.y (mgr.44107) 44 : cephadm [INF] Reconfiguring mgr.y (monmap changed)...
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: cephadm 2026-03-10T11:47:53.492958+0000 mgr.y (mgr.44107) 45 : cephadm [INF] Reconfiguring daemon mgr.y on vm05
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: cephadm 2026-03-10T11:47:53.867502+0000 mgr.y (mgr.44107) 46 : cephadm [INF] Reconfiguring osd.4 (monmap changed)...
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: cephadm 2026-03-10T11:47:53.869533+0000 mgr.y (mgr.44107) 47 : cephadm [INF] Reconfiguring daemon osd.4 on vm07
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.253001+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.260104+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.261492+0000 mon.c (mon.1) 49 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.262430+0000 mon.c (mon.1) 50 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.643003+0000 mon.a (mon.0) 84 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.649720+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.650671+0000 mon.c (mon.1) 51 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.651145+0000 mon.a (mon.0) 86 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.651732+0000 mon.c (mon.1) 52 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.652180+0000 mon.c (mon.1) 53 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.991956+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.998656+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:54.999649+0000 mon.c (mon.1) 54 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T11:47:55.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:55 vm05 bash[68966]: audit 2026-03-10T11:47:55.000307+0000 mon.c (mon.1) 55 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: cephadm 2026-03-10T11:47:54.261203+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring osd.5 (monmap changed)...
2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: cephadm 2026-03-10T11:47:54.263946+0000 mgr.y (mgr.44107) 49 : cephadm [INF] Reconfiguring daemon osd.5 on vm07
2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: cephadm 2026-03-10T11:47:54.650472+0000 mgr.y (mgr.44107) 50 : cephadm [INF] Reconfiguring mgr.x (monmap changed)...
2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: cephadm 2026-03-10T11:47:54.652738+0000 mgr.y (mgr.44107) 51 : cephadm [INF] Reconfiguring daemon mgr.x on vm07
2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: cephadm 2026-03-10T11:47:54.999450+0000 mgr.y (mgr.44107) 52 : cephadm [INF] Reconfiguring osd.6 (monmap changed)...
2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: cephadm 2026-03-10T11:47:55.001701+0000 mgr.y (mgr.44107) 53 : cephadm [INF] Reconfiguring daemon osd.6 on vm07 2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: cephadm 2026-03-10T11:47:55.001701+0000 mgr.y (mgr.44107) 53 : cephadm [INF] Reconfiguring daemon osd.6 on vm07 2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.438554+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.438554+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.445030+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.445030+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.446084+0000 mon.c (mon.1) 56 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.446084+0000 mon.c (mon.1) 56 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.446531+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.446531+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.448185+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:57 vm05 bash[65415]: audit 2026-03-10T11:47:55.448185+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.261203+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 
2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.261203+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.263946+0000 mgr.y (mgr.44107) 49 : cephadm [INF] Reconfiguring daemon osd.5 on vm07 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.263946+0000 mgr.y (mgr.44107) 49 : cephadm [INF] Reconfiguring daemon osd.5 on vm07 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.650472+0000 mgr.y (mgr.44107) 50 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.650472+0000 mgr.y (mgr.44107) 50 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.652738+0000 mgr.y (mgr.44107) 51 : cephadm [INF] Reconfiguring daemon mgr.x on vm07 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.652738+0000 mgr.y (mgr.44107) 51 : cephadm [INF] Reconfiguring daemon mgr.x on vm07 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.999450+0000 mgr.y (mgr.44107) 52 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:54.999450+0000 mgr.y (mgr.44107) 52 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 
2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:55.001701+0000 mgr.y (mgr.44107) 53 : cephadm [INF] Reconfiguring daemon osd.6 on vm07 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: cephadm 2026-03-10T11:47:55.001701+0000 mgr.y (mgr.44107) 53 : cephadm [INF] Reconfiguring daemon osd.6 on vm07 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.438554+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.438554+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.445030+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.445030+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.446084+0000 mon.c (mon.1) 56 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.446084+0000 mon.c (mon.1) 56 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.446531+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.446531+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.448185+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:57.342 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:56 vm05 bash[68966]: audit 2026-03-10T11:47:55.448185+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.261203+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 
2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.261203+0000 mgr.y (mgr.44107) 48 : cephadm [INF] Reconfiguring osd.5 (monmap changed)... 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.263946+0000 mgr.y (mgr.44107) 49 : cephadm [INF] Reconfiguring daemon osd.5 on vm07 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.263946+0000 mgr.y (mgr.44107) 49 : cephadm [INF] Reconfiguring daemon osd.5 on vm07 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.650472+0000 mgr.y (mgr.44107) 50 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.650472+0000 mgr.y (mgr.44107) 50 : cephadm [INF] Reconfiguring mgr.x (monmap changed)... 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.652738+0000 mgr.y (mgr.44107) 51 : cephadm [INF] Reconfiguring daemon mgr.x on vm07 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.652738+0000 mgr.y (mgr.44107) 51 : cephadm [INF] Reconfiguring daemon mgr.x on vm07 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.999450+0000 mgr.y (mgr.44107) 52 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:54.999450+0000 mgr.y (mgr.44107) 52 : cephadm [INF] Reconfiguring osd.6 (monmap changed)... 
2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:55.001701+0000 mgr.y (mgr.44107) 53 : cephadm [INF] Reconfiguring daemon osd.6 on vm07 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: cephadm 2026-03-10T11:47:55.001701+0000 mgr.y (mgr.44107) 53 : cephadm [INF] Reconfiguring daemon osd.6 on vm07 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.438554+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.438554+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.445030+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.445030+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.446084+0000 mon.c (mon.1) 56 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.446084+0000 mon.c (mon.1) 56 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.446531+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.446531+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.448185+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:57.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:57 vm07 bash[46158]: audit 2026-03-10T11:47:55.448185+0000 mon.c (mon.1) 57 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: cluster 2026-03-10T11:47:55.428627+0000 mgr.y (mgr.44107) 54 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB 
/ 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: cluster 2026-03-10T11:47:55.428627+0000 mgr.y (mgr.44107) 54 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: cephadm 2026-03-10T11:47:55.445857+0000 mgr.y (mgr.44107) 55 : cephadm [INF] Reconfiguring rgw.foo.vm07.mbukmh (monmap changed)... 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: cephadm 2026-03-10T11:47:55.445857+0000 mgr.y (mgr.44107) 55 : cephadm [INF] Reconfiguring rgw.foo.vm07.mbukmh (monmap changed)... 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: cephadm 2026-03-10T11:47:55.448721+0000 mgr.y (mgr.44107) 56 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: cephadm 2026-03-10T11:47:55.448721+0000 mgr.y (mgr.44107) 56 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.188774+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.188774+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.195404+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.195404+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.196430+0000 mon.c (mon.1) 58 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.196430+0000 mon.c (mon.1) 58 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.197233+0000 mon.c (mon.1) 59 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.197233+0000 mon.c (mon.1) 59 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.197801+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.197801+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.600531+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.600531+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.610205+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.610205+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.617 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.612273+0000 mon.c (mon.1) 61 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.612273+0000 mon.c (mon.1) 61 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.613118+0000 mon.c (mon.1) 62 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:58 vm05 bash[65415]: audit 2026-03-10T11:47:57.613118+0000 mon.c (mon.1) 62 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: cluster 2026-03-10T11:47:55.428627+0000 mgr.y (mgr.44107) 54 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: cluster 2026-03-10T11:47:55.428627+0000 mgr.y (mgr.44107) 54 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: cephadm 2026-03-10T11:47:55.445857+0000 mgr.y (mgr.44107) 55 : cephadm [INF] Reconfiguring rgw.foo.vm07.mbukmh (monmap changed)... 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: cephadm 2026-03-10T11:47:55.445857+0000 mgr.y (mgr.44107) 55 : cephadm [INF] Reconfiguring rgw.foo.vm07.mbukmh (monmap changed)... 
2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: cephadm 2026-03-10T11:47:55.448721+0000 mgr.y (mgr.44107) 56 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: cephadm 2026-03-10T11:47:55.448721+0000 mgr.y (mgr.44107) 56 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.188774+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.188774+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.195404+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.195404+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.196430+0000 mon.c (mon.1) 58 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.196430+0000 mon.c (mon.1) 58 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.197233+0000 mon.c (mon.1) 59 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.197233+0000 mon.c (mon.1) 59 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.197801+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.197801+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.600531+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.600531+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.610205+0000 mon.a (mon.0) 95 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.610205+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.612273+0000 mon.c (mon.1) 61 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.612273+0000 mon.c (mon.1) 61 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.613118+0000 mon.c (mon.1) 62 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.618 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:58 vm05 bash[68966]: audit 2026-03-10T11:47:57.613118+0000 mon.c (mon.1) 62 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: cluster 2026-03-10T11:47:55.428627+0000 mgr.y (mgr.44107) 54 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: cluster 2026-03-10T11:47:55.428627+0000 mgr.y (mgr.44107) 54 : cluster [DBG] pgmap v13: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: cephadm 2026-03-10T11:47:55.445857+0000 mgr.y (mgr.44107) 55 : cephadm [INF] Reconfiguring rgw.foo.vm07.mbukmh (monmap changed)... 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: cephadm 2026-03-10T11:47:55.445857+0000 mgr.y (mgr.44107) 55 : cephadm [INF] Reconfiguring rgw.foo.vm07.mbukmh (monmap changed)... 
2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: cephadm 2026-03-10T11:47:55.448721+0000 mgr.y (mgr.44107) 56 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: cephadm 2026-03-10T11:47:55.448721+0000 mgr.y (mgr.44107) 56 : cephadm [INF] Reconfiguring daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.188774+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.188774+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.195404+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.195404+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.196430+0000 mon.c (mon.1) 58 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.196430+0000 mon.c (mon.1) 58 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.197233+0000 mon.c (mon.1) 59 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.197233+0000 mon.c (mon.1) 59 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.197801+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.197801+0000 mon.c (mon.1) 60 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.600531+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.600531+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.610205+0000 mon.a (mon.0) 95 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.610205+0000 mon.a (mon.0) 95 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.612273+0000 mon.c (mon.1) 61 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.612273+0000 mon.c (mon.1) 61 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.613118+0000 mon.c (mon.1) 62 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:58.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:58 vm07 bash[46158]: audit 2026-03-10T11:47:57.613118+0000 mon.c (mon.1) 62 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:47:59.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:47:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:47:58] "GET /metrics HTTP/1.1" 200 37591 "" "Prometheus/2.51.0" 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cephadm 2026-03-10T11:47:57.196255+0000 mgr.y (mgr.44107) 57 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cephadm 2026-03-10T11:47:57.196255+0000 mgr.y (mgr.44107) 57 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cephadm 2026-03-10T11:47:57.198350+0000 mgr.y (mgr.44107) 58 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cephadm 2026-03-10T11:47:57.198350+0000 mgr.y (mgr.44107) 58 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cluster 2026-03-10T11:47:57.429061+0000 mgr.y (mgr.44107) 59 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cluster 2026-03-10T11:47:57.429061+0000 mgr.y (mgr.44107) 59 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cephadm 2026-03-10T11:47:57.612030+0000 mgr.y (mgr.44107) 60 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cephadm 2026-03-10T11:47:57.612030+0000 mgr.y (mgr.44107) 60 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 
2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cephadm 2026-03-10T11:47:57.614791+0000 mgr.y (mgr.44107) 61 : cephadm [INF] Reconfiguring daemon osd.7 on vm07 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:47:59 vm05 bash[65415]: cephadm 2026-03-10T11:47:57.614791+0000 mgr.y (mgr.44107) 61 : cephadm [INF] Reconfiguring daemon osd.7 on vm07 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cephadm 2026-03-10T11:47:57.196255+0000 mgr.y (mgr.44107) 57 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cephadm 2026-03-10T11:47:57.196255+0000 mgr.y (mgr.44107) 57 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cephadm 2026-03-10T11:47:57.198350+0000 mgr.y (mgr.44107) 58 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cephadm 2026-03-10T11:47:57.198350+0000 mgr.y (mgr.44107) 58 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cluster 2026-03-10T11:47:57.429061+0000 mgr.y (mgr.44107) 59 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cluster 2026-03-10T11:47:57.429061+0000 mgr.y (mgr.44107) 59 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cephadm 2026-03-10T11:47:57.612030+0000 mgr.y (mgr.44107) 60 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cephadm 2026-03-10T11:47:57.612030+0000 mgr.y (mgr.44107) 60 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cephadm 2026-03-10T11:47:57.614791+0000 mgr.y (mgr.44107) 61 : cephadm [INF] Reconfiguring daemon osd.7 on vm07 2026-03-10T11:48:00.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:47:59 vm05 bash[68966]: cephadm 2026-03-10T11:47:57.614791+0000 mgr.y (mgr.44107) 61 : cephadm [INF] Reconfiguring daemon osd.7 on vm07 2026-03-10T11:48:00.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cephadm 2026-03-10T11:47:57.196255+0000 mgr.y (mgr.44107) 57 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-10T11:48:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cephadm 2026-03-10T11:47:57.196255+0000 mgr.y (mgr.44107) 57 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-10T11:48:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cephadm 2026-03-10T11:47:57.198350+0000 mgr.y (mgr.44107) 58 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T11:48:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cephadm 2026-03-10T11:47:57.198350+0000 mgr.y (mgr.44107) 58 : cephadm [INF] Reconfiguring daemon mon.b on vm07 2026-03-10T11:48:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cluster 2026-03-10T11:47:57.429061+0000 mgr.y (mgr.44107) 59 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cluster 2026-03-10T11:47:57.429061+0000 mgr.y (mgr.44107) 59 : cluster [DBG] pgmap v14: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cephadm 2026-03-10T11:47:57.612030+0000 mgr.y (mgr.44107) 60 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-10T11:48:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cephadm 2026-03-10T11:47:57.612030+0000 mgr.y (mgr.44107) 60 : cephadm [INF] Reconfiguring osd.7 (monmap changed)... 2026-03-10T11:48:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cephadm 2026-03-10T11:47:57.614791+0000 mgr.y (mgr.44107) 61 : cephadm [INF] Reconfiguring daemon osd.7 on vm07 2026-03-10T11:48:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:47:59 vm07 bash[46158]: cephadm 2026-03-10T11:47:57.614791+0000 mgr.y (mgr.44107) 61 : cephadm [INF] Reconfiguring daemon osd.7 on vm07 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:47:59.015937+0000 mgr.y (mgr.44107) 62 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:47:59.015937+0000 mgr.y (mgr.44107) 62 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: cluster 2026-03-10T11:47:59.429350+0000 mgr.y (mgr.44107) 63 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: cluster 2026-03-10T11:47:59.429350+0000 mgr.y (mgr.44107) 63 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.174562+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.174562+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.183456+0000 mon.a (mon.0) 97 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.183456+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.231468+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.231468+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.233153+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.233153+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.234217+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.234217+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.239319+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.239319+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.244334+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.244334+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.244598+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.244598+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch 2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 
2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.248242+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished
2026-03-10T11:48:01.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.252323+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.252548+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.255774+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.259989+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.267968+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.269948+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.280111+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.281933+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.286705+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.287968+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:47:59.015937+0000 mgr.y (mgr.44107) 62 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: cluster 2026-03-10T11:47:59.429350+0000 mgr.y (mgr.44107) 63 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.174562+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.183456+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.231468+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.233153+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.234217+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.239319+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.244334+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.244598+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.248242+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.252323+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.252548+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.255774+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.259989+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.267968+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.269948+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.280111+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.281933+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.286705+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.287968+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.289332+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.289537+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.290137+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.290338+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.291198+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.297110+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.299525+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.304130+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.306759+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.311283+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.313672+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.314969+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.315153+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.315589+0000 mon.c (mon.1) 79 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.315788+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.316623+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.317780+0000 mon.c (mon.1) 81 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.317940+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.318386+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.318547+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.319367+0000 mon.c (mon.1) 83 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.320532+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.320692+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.321122+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.321283+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.322103+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.323251+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.323413+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.323906+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.324065+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.324833+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.325977+0000 mon.c (mon.1) 90 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.326156+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.326617+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.326772+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.327539+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.328784+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.328953+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.329405+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.329575+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.331691+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.331887+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.337084+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:48:01.094 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.339402+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.339688+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.367379+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.370633+0000 mon.c (mon.1) 97 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.370868+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.373555+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.377114+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.377445+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.378301+0000 mon.c (mon.1) 99 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.378598+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.381579+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.386902+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.289332+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.289537+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.290137+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.290338+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.291198+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.297110+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.299525+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.304130+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.306759+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.311283+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.313672+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.314969+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.315153+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.315589+0000 mon.c (mon.1) 79 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.315788+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.316623+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.317780+0000 mon.c (mon.1) 81 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.317940+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.318386+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.095 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.318547+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.319367+0000 mon.c (mon.1) 83 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.320532+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.320692+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.321122+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.321283+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.322103+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.323251+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.323413+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.323906+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.324065+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.324833+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.325977+0000 mon.c (mon.1) 90 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.326156+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.326617+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.326772+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.327539+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.328784+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.328953+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.329405+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.329575+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.331691+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.331887+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.337084+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.339402+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.339688+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.367379+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.370633+0000 mon.c (mon.1) 97 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10
11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.370868+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.370868+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.373555+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.373555+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.377114+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.377114+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.377445+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.377445+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.378301+0000 mon.c (mon.1) 99 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.378301+0000 mon.c (mon.1) 99 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.378598+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:48:01.096 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.378598+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 
2026-03-10T11:48:00.381579+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.381579+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.386902+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.386902+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.387218+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.387218+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.388127+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.388127+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.388408+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.388408+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.401859+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.401859+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:48:01.097 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.406754+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.406754+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.407018+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.407018+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.408003+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.408003+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.408212+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.408212+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.410870+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.410870+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.415275+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.415275+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.415510+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.415510+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.416533+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.416533+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.416733+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.416733+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.419283+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.419283+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.424072+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.424072+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.424304+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.424304+0000 mon.a (mon.0) 141 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.427011+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.427011+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.431418+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.387218+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.387218+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.388127+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.388127+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.388408+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.388408+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.401859+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.401859+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 
2026-03-10T11:48:00.406754+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.406754+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.407018+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.407018+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.408003+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.408003+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.408212+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.408212+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.410870+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.431418+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.097 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.431633+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.431633+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.432505+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.432505+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.432693+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.432693+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.433559+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.433559+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.433745+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.433745+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.434609+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.434609+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.434790+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.434790+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.435669+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.435669+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.435849+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.435849+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.436695+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.436695+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.436878+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.436878+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.438008+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.438008+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.438180+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.438180+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:48:01.098 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.441885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.441885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.445472+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.445472+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.446975+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.446975+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.447870+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.447870+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.452594+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.452594+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.474184+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.474184+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.497652+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.497652+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.44107 
192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.499540+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.499540+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.500826+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.500826+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.505784+0000 mon.a (mon.0) 153 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:00 vm05 bash[65415]: audit 2026-03-10T11:48:00.505784+0000 mon.a (mon.0) 153 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.410870+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.415275+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.415275+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.415510+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.415510+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.416533+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:48:01.098 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.416533+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.416733+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.416733+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.419283+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.419283+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.424072+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.424072+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.424304+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:48:01.098 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.424304+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.427011+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.427011+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.431418+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: 
dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.431418+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.431633+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.431633+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.432505+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.432505+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.432693+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.432693+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.433559+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.433559+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.433745+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.433745+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.434609+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.434609+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.434790+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.434790+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.435669+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.435669+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.435849+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.435849+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.436695+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.436695+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.436878+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.436878+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.438008+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:48:01.099 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.438008+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.438180+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.441885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.445472+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.446975+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.447870+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.452594+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.474184+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.497652+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.499540+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.500826+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:01.099 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:00 vm05 bash[68966]: audit 2026-03-10T11:48:00.505784+0000 mon.a (mon.0) 153 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:47:59.015937+0000 mgr.y (mgr.44107) 62 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: cluster 2026-03-10T11:47:59.429350+0000 mgr.y (mgr.44107) 63 : cluster [DBG] pgmap v15: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.174562+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.183456+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.231468+0000 mon.c (mon.1) 63 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.233153+0000 mon.c (mon.1) 64 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.234217+0000 mon.c (mon.1) 65 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.239319+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.244334+0000 mon.c (mon.1) 66 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.244598+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.248242+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]': finished
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.252323+0000 mon.c (mon.1) 67 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.252548+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.255774+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]': finished
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.259989+0000 mon.c (mon.1) 68 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.267968+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.269948+0000 mon.c (mon.1) 69 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.280111+0000 mon.a (mon.0) 104 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.281933+0000 mon.c (mon.1) 70 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.286705+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.287968+0000 mon.c (mon.1) 71 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.289332+0000 mon.c (mon.1) 72 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.289537+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.290137+0000 mon.c (mon.1) 73 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.290338+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.291198+0000 mon.c (mon.1) 74 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.297110+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.299525+0000 mon.c (mon.1) 75 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.304130+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.306759+0000 mon.c (mon.1) 76 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.311283+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.313672+0000 mon.c (mon.1) 77 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.314969+0000 mon.c (mon.1) 78 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.315153+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.315589+0000 mon.c (mon.1) 79 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.315788+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.316623+0000 mon.c (mon.1) 80 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.317780+0000 mon.c (mon.1) 81 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.317940+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.318386+0000 mon.c (mon.1) 82 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.318547+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.319367+0000 mon.c (mon.1) 83 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.320532+0000 mon.c (mon.1) 84 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.320692+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.321122+0000 mon.c (mon.1) 85 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.321283+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.322103+0000 mon.c (mon.1) 86 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.323251+0000 mon.c (mon.1) 87 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.323413+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.323906+0000 mon.c (mon.1) 88 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.324065+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.324833+0000 mon.c (mon.1) 89 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.325977+0000 mon.c (mon.1) 90 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.326156+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.326617+0000 mon.c (mon.1) 91 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.326772+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.327539+0000 mon.c (mon.1) 92 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.328784+0000 mon.c (mon.1) 93 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.328953+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.a"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.329405+0000 mon.c (mon.1) 94 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.329575+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon.c"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.331691+0000 mon.c (mon.1) 95 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.331887+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.337084+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.339402+0000 mon.c (mon.1) 96 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.339688+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.367379+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.370633+0000 mon.c (mon.1) 97 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.370868+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.373555+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.377114+0000 mon.c (mon.1) 98 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.377445+0000 mon.a (mon.0) 129 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.378301+0000 mon.c (mon.1) 99 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.378598+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:48:01.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.381579+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.386902+0000 mon.c (mon.1) 100 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.387218+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.388127+0000 mon.c (mon.1) 101 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.388408+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.401859+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.406754+0000 mon.c (mon.1) 102 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.407018+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.408003+0000 mon.c (mon.1) 103 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.408212+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.410870+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.415275+0000 mon.c (mon.1) 104 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.415510+0000 mon.a (mon.0) 138 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.416533+0000 mon.c (mon.1) 105 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.416733+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.419283+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.424072+0000 mon.c (mon.1) 106 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.424304+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.427011+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.431418+0000 mon.c (mon.1) 107 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.431633+0000 mon.a (mon.0) 143 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.432505+0000 mon.c (mon.1) 108 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.432693+0000 mon.a (mon.0) 144 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.433559+0000 mon.c (mon.1) 109 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.433745+0000 mon.a (mon.0) 145 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.434609+0000 mon.c (mon.1) 110 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.434790+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.435669+0000 mon.c (mon.1) 111 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.435849+0000 mon.a (mon.0) 147 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.436695+0000 mon.c (mon.1) 112 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.436878+0000 mon.a (mon.0) 148 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.438008+0000 mon.c (mon.1) 113 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:01.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.438180+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.441885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.445472+0000 mon.c (mon.1) 114 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.446975+0000 mon.c (mon.1) 115 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.447870+0000 mon.c (mon.1) 116 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.452594+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.474184+0000 mon.a (mon.0) 152 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.497652+0000 mon.c (mon.1) 117 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.499540+0000 mon.c (mon.1) 118 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.500826+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.500826+0000 mon.c (mon.1) 119 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.505784+0000 mon.a (mon.0) 153 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:01.201 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:00 vm07 bash[46158]: audit 2026-03-10T11:48:00.505784+0000 mon.a (mon.0) 153 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.234971+0000 mgr.y (mgr.44107) 64 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.234971+0000 mgr.y (mgr.44107) 64 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.260561+0000 mgr.y (mgr.44107) 65 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.260561+0000 mgr.y (mgr.44107) 65 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.270513+0000 mgr.y (mgr.44107) 66 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.270513+0000 mgr.y (mgr.44107) 66 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.282470+0000 mgr.y (mgr.44107) 67 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.282470+0000 mgr.y (mgr.44107) 67 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.288416+0000 mgr.y (mgr.44107) 68 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.288416+0000 mgr.y (mgr.44107) 68 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.291713+0000 mgr.y (mgr.44107) 69 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.300028+0000 mgr.y (mgr.44107) 70 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.307254+0000 mgr.y (mgr.44107) 71 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.314159+0000 mgr.y (mgr.44107) 72 : cephadm [INF] Upgrade: Setting container_image for all node-exporter
2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.317087+0000 mgr.y (mgr.44107) 73 : cephadm [INF] Upgrade: Setting container_image for all prometheus
2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.319840+0000 mgr.y (mgr.44107) 74 : cephadm [INF] Upgrade: Setting container_image for all alertmanager
2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.322547+0000 mgr.y (mgr.44107) 75 : cephadm [INF] Upgrade: Setting container_image for all grafana
2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.325287+0000 mgr.y (mgr.44107) 76 : cephadm [INF] Upgrade: Setting container_image for all loki
2026-03-10T11:48:02.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.328028+0000 mgr.y (mgr.44107) 77 : cephadm [INF] Upgrade: Setting container_image for all promtail
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.329990+0000 mgr.y (mgr.44107) 78 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:01 vm05 bash[68966]: cephadm 2026-03-10T11:48:00.437575+0000 mgr.y (mgr.44107) 79 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.234971+0000 mgr.y (mgr.44107) 64 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.260561+0000 mgr.y (mgr.44107) 65 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.270513+0000 mgr.y (mgr.44107) 66 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.282470+0000 mgr.y (mgr.44107) 67 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.288416+0000 mgr.y (mgr.44107) 68 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.291713+0000 mgr.y (mgr.44107) 69 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.300028+0000 mgr.y (mgr.44107) 70 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.307254+0000 mgr.y (mgr.44107) 71 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.314159+0000 mgr.y (mgr.44107) 72 : cephadm [INF] Upgrade: Setting container_image for all node-exporter
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.317087+0000 mgr.y (mgr.44107) 73 : cephadm [INF] Upgrade: Setting container_image for all prometheus
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.319840+0000 mgr.y (mgr.44107) 74 : cephadm [INF] Upgrade: Setting container_image for all alertmanager
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.322547+0000 mgr.y (mgr.44107) 75 : cephadm [INF] Upgrade: Setting container_image for all grafana
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.325287+0000 mgr.y (mgr.44107) 76 : cephadm [INF] Upgrade: Setting container_image for all loki
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.328028+0000 mgr.y (mgr.44107) 77 : cephadm [INF] Upgrade: Setting container_image for all promtail
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.329990+0000 mgr.y (mgr.44107) 78 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:48:02.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:01 vm05 bash[65415]: cephadm 2026-03-10T11:48:00.437575+0000 mgr.y (mgr.44107) 79 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:48:02.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.234971+0000 mgr.y (mgr.44107) 64 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.260561+0000 mgr.y (mgr.44107) 65 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.270513+0000 mgr.y (mgr.44107) 66 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.282470+0000 mgr.y (mgr.44107) 67 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.288416+0000 mgr.y (mgr.44107) 68 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.291713+0000 mgr.y (mgr.44107) 69 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.300028+0000 mgr.y (mgr.44107) 70 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.307254+0000 mgr.y (mgr.44107) 71 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.314159+0000 mgr.y (mgr.44107) 72 : cephadm [INF] Upgrade: Setting container_image for all node-exporter
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.317087+0000 mgr.y (mgr.44107) 73 : cephadm [INF] Upgrade: Setting container_image for all prometheus
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.319840+0000 mgr.y (mgr.44107) 74 : cephadm [INF] Upgrade: Setting container_image for all alertmanager
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.322547+0000 mgr.y (mgr.44107) 75 : cephadm [INF] Upgrade: Setting container_image for all grafana
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.325287+0000 mgr.y (mgr.44107) 76 : cephadm [INF] Upgrade: Setting container_image for all loki
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.328028+0000 mgr.y (mgr.44107) 77 : cephadm [INF] Upgrade: Setting container_image for all promtail
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.329990+0000 mgr.y (mgr.44107) 78 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:48:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:01 vm07 bash[46158]: cephadm 2026-03-10T11:48:00.437575+0000 mgr.y (mgr.44107) 79 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:48:03.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:02 vm05 bash[68966]: cluster 2026-03-10T11:48:01.429630+0000 mgr.y (mgr.44107) 80 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:03.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:02 vm05 bash[65415]: cluster 2026-03-10T11:48:01.429630+0000 mgr.y (mgr.44107) 80 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:03.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:02 vm07 bash[46158]: cluster 2026-03-10T11:48:01.429630+0000 mgr.y (mgr.44107) 80 : cluster [DBG] pgmap v16: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:05.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:04 vm05 bash[65415]: cluster 2026-03-10T11:48:03.430076+0000 mgr.y (mgr.44107) 81 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:05.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:04 vm05 bash[68966]: cluster 2026-03-10T11:48:03.430076+0000 mgr.y (mgr.44107) 81 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:05.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:04 vm07 bash[46158]: cluster 2026-03-10T11:48:03.430076+0000 mgr.y (mgr.44107) 81 : cluster [DBG] pgmap v17: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:06.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:05 vm07 bash[46158]: audit 2026-03-10T11:48:05.472492+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:48:06.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:05 vm05 bash[65415]: audit 2026-03-10T11:48:05.472492+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:48:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:05 vm05 bash[68966]: audit 2026-03-10T11:48:05.472492+0000 mon.c (mon.1) 120 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:48:07.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:07 vm07 bash[46158]: cluster 2026-03-10T11:48:05.430340+0000 mgr.y (mgr.44107) 82 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:07.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:07 vm05 bash[68966]: cluster 2026-03-10T11:48:05.430340+0000 mgr.y (mgr.44107) 82 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
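Note: the three mon journals above each mirror the mgr's "cephadm" log channel, which is why every "Upgrade: ..." message is recorded once per monitor. Outside of a teuthology run, the same progression can be followed live; a minimal sketch, assuming a shell with the client.admin keyring as in this job:

    # stream the mgr's cephadm channel (the source of the
    # "Upgrade: Setting container_image ..." lines above)
    ceph -W cephadm
    # or poll the orchestrator's upgrade state as JSON
    ceph orch upgrade status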
2026-03-10T11:48:07.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:07 vm05 bash[65415]: cluster 2026-03-10T11:48:05.430340+0000 mgr.y (mgr.44107) 82 : cluster [DBG] pgmap v18: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:09.123 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:48:08] "GET /metrics HTTP/1.1" 200 37589 "" "Prometheus/2.51.0"
2026-03-10T11:48:09.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:09 vm07 bash[46158]: cluster 2026-03-10T11:48:07.430847+0000 mgr.y (mgr.44107) 83 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:09.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:09 vm05 bash[65415]: cluster 2026-03-10T11:48:07.430847+0000 mgr.y (mgr.44107) 83 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:09.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:09 vm05 bash[68966]: cluster 2026-03-10T11:48:07.430847+0000 mgr.y (mgr.44107) 83 : cluster [DBG] pgmap v19: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:10.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:10 vm07 bash[46158]: audit 2026-03-10T11:48:09.023579+0000 mgr.y (mgr.44107) 84 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:10.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:10 vm05 bash[65415]: audit 2026-03-10T11:48:09.023579+0000 mgr.y (mgr.44107) 84 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:10.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:10 vm05 bash[68966]: audit 2026-03-10T11:48:09.023579+0000 mgr.y (mgr.44107) 84 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:11.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:11 vm07 bash[46158]: cluster 2026-03-10T11:48:09.431122+0000 mgr.y (mgr.44107) 85 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:11.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:11 vm05 bash[68966]: cluster 2026-03-10T11:48:09.431122+0000 mgr.y (mgr.44107) 85 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:11.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:11 vm05 bash[65415]: cluster 2026-03-10T11:48:09.431122+0000 mgr.y (mgr.44107) 85 : cluster [DBG] pgmap v20: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:12.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:12 vm07 bash[46158]: cluster 2026-03-10T11:48:11.431410+0000 mgr.y (mgr.44107) 86 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:12.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:12 vm05 bash[65415]: cluster 2026-03-10T11:48:11.431410+0000 mgr.y (mgr.44107) 86 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:12.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:12 vm05 bash[68966]: cluster 2026-03-10T11:48:11.431410+0000 mgr.y (mgr.44107) 86 : cluster [DBG] pgmap v21: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:14.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:14 vm05 bash[65415]: cluster 2026-03-10T11:48:13.431851+0000 mgr.y (mgr.44107) 87 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:14.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:14 vm05 bash[68966]: cluster 2026-03-10T11:48:13.431851+0000 mgr.y (mgr.44107) 87 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:14.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:14 vm07 bash[46158]: cluster 2026-03-10T11:48:13.431851+0000 mgr.y (mgr.44107) 87 : cluster [DBG] pgmap v22: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:16.659 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:48:16.745 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:16 vm05 bash[68966]: cluster 2026-03-10T11:48:15.432073+0000 mgr.y (mgr.44107) 88 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:16.745 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:16 vm05 bash[65415]: cluster 2026-03-10T11:48:15.432073+0000 mgr.y (mgr.44107) 88 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
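Every verification step from here on is wrapped in the same invocation: teuthology drives the cluster through cephadm shell, which runs the quoted command inside a ceph container against this cluster's fsid. Stripped of the nested quoting, the pattern is (a sketch reusing this run's image, fsid, and sha1, which are job-specific values, not defaults):

    sudo cephadm --image quay.io/ceph/ceph:v17.2.0 shell \
        -c /etc/ceph/ceph.conf \
        -k /etc/ceph/ceph.client.admin.keyring \
        --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d \
        -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df \
        -- bash -c 'ceph orch ps'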
2026-03-10T11:48:16.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:16 vm07 bash[46158]: cluster 2026-03-10T11:48:15.432073+0000 mgr.y (mgr.44107) 88 : cluster [DBG] pgmap v23: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:17.079 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (14m) 35s ago 21m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (2m) 35s ago 21m 64.6M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (2m) 35s ago 21m 43.7M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (2m) 35s ago 24m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (11m) 35s ago 25m 508M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (46s) 35s ago 25m 33.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (96s) 35s ago 24m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (60s) 35s ago 24m 33.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (14m) 35s ago 21m 7999k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (14m) 35s ago 21m 7816k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (24m) 35s ago 24m 54.3M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (23m) 35s ago 23m 56.5M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (23m) 35s ago 23m 52.0M 4096M 17.2.0 e1d6a67b021e 561729c88c06
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (23m) 35s ago 23m 55.9M 4096M 17.2.0 e1d6a67b021e 56034d2898b8
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (23m) 35s ago 23m 56.0M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (22m) 35s ago 22m 52.5M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (22m) 35s ago 22m 51.5M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (22m) 35s ago 22m 54.1M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (2m) 35s ago 21m 40.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (21m) 35s ago 21m 88.6M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:48:17.080 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (21m) 35s ago 21m 89.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:48:17.127 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mon | length == 1'"'"''
2026-03-10T11:48:17.589 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:48:17.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:17 vm05 bash[65415]: audit 2026-03-10T11:48:16.592580+0000 mgr.y (mgr.44107) 89 : audit [DBG] from='client.44146 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:17.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:17 vm05 bash[65415]: audit 2026-03-10T11:48:17.080020+0000 mgr.y (mgr.44107) 90 : audit [DBG] from='client.34205 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:17.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:17 vm05 bash[68966]: audit 2026-03-10T11:48:16.592580+0000 mgr.y (mgr.44107) 89 : audit [DBG] from='client.44146 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:17.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:17 vm05 bash[68966]: audit 2026-03-10T11:48:17.080020+0000 mgr.y (mgr.44107) 90 : audit [DBG] from='client.34205 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
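The "ceph orch ps" listing above shows the staggered state the test expects at this point: both mgrs and all three mons already report 19.2.3-678-ge911bdeb, while the osd, iscsi, and rgw daemons still run 17.2.0. The two jq probes here assert mon convergence: "ceph versions" groups daemons by running version, so ".mon" must contain exactly one key, and that key must name the target build. With the nested shell quoting unwrapped, the checks reduce to this sketch ($sha1 is the value teuthology injects via -e sha1=...):

    ceph versions | jq -e '.mon | length == 1'            # all mons on a single version
    ceph versions | jq -e '.mon | keys' | grep "$sha1"    # ...and it is the target build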
2026-03-10T11:48:17.626 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.mon | keys'"'"' | grep $sha1'
2026-03-10T11:48:17.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:17 vm07 bash[46158]: audit 2026-03-10T11:48:16.592580+0000 mgr.y (mgr.44107) 89 : audit [DBG] from='client.44146 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:17.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:17 vm07 bash[46158]: audit 2026-03-10T11:48:17.080020+0000 mgr.y (mgr.44107) 90 : audit [DBG] from='client.34205 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:18.067 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)"
2026-03-10T11:48:18.103 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '"'"'.up_to_date | length == 5'"'"''
2026-03-10T11:48:18.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:18 vm05 bash[65415]: cluster 2026-03-10T11:48:17.432484+0000 mgr.y (mgr.44107) 91 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:18.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:18 vm05 bash[65415]: audit 2026-03-10T11:48:17.583722+0000 mon.a (mon.0) 154 : audit [DBG] from='client.? 192.168.123.105:0/2241077004' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:18.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:18 vm05 bash[65415]: audit 2026-03-10T11:48:18.062120+0000 mon.a (mon.0) 155 : audit [DBG] from='client.? 192.168.123.105:0/2189668411' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:18.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:18 vm05 bash[68966]: cluster 2026-03-10T11:48:17.432484+0000 mgr.y (mgr.44107) 91 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:18.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:18 vm05 bash[68966]: audit 2026-03-10T11:48:17.583722+0000 mon.a (mon.0) 154 : audit [DBG] from='client.? 192.168.123.105:0/2241077004' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:18.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:18 vm05 bash[68966]: audit 2026-03-10T11:48:18.062120+0000 mon.a (mon.0) 155 : audit [DBG] from='client.? 192.168.123.105:0/2189668411' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:18.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:18 vm07 bash[46158]: cluster 2026-03-10T11:48:17.432484+0000 mgr.y (mgr.44107) 91 : cluster [DBG] pgmap v24: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:48:18.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:18 vm07 bash[46158]: audit 2026-03-10T11:48:17.583722+0000 mon.a (mon.0) 154 : audit [DBG] from='client.? 192.168.123.105:0/2241077004' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:18.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:18 vm07 bash[46158]: audit 2026-03-10T11:48:18.062120+0000 mon.a (mon.0) 155 : audit [DBG] from='client.? 192.168.123.105:0/2189668411' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:19.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:18 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:48:18] "GET /metrics HTTP/1.1" 200 37589 "" "Prometheus/2.51.0"
2026-03-10T11:48:19.770 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:19 vm05 bash[68966]: audit 2026-03-10T11:48:18.506185+0000 mgr.y (mgr.44107) 92 : audit [DBG] from='client.44164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:19.770 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:19 vm05 bash[68966]: audit 2026-03-10T11:48:19.025023+0000 mgr.y (mgr.44107) 93 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:19.770 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:19 vm05 bash[65415]: audit 2026-03-10T11:48:18.506185+0000 mgr.y (mgr.44107) 92 : audit [DBG] from='client.44164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:19.770 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:19 vm05 bash[65415]: audit 2026-03-10T11:48:19.025023+0000 mgr.y (mgr.44107) 93 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:19.931 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:48:19.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:19 vm07 bash[46158]: audit 2026-03-10T11:48:18.506185+0000 mgr.y (mgr.44107) 92 : audit [DBG] from='client.44164 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:19.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:19 vm07 bash[46158]: audit 2026-03-10T11:48:19.025023+0000 mgr.y (mgr.44107) 93 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:19.988 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-10T11:48:20.386 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:48:20.386 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": null,
2026-03-10T11:48:20.386 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": false,
2026-03-10T11:48:20.386 INFO:teuthology.orchestra.run.vm05.stdout: "which": "",
2026-03-10T11:48:20.386 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:48:20.386 INFO:teuthology.orchestra.run.vm05.stdout: "progress": null,
2026-03-10T11:48:20.386 INFO:teuthology.orchestra.run.vm05.stdout: "message": "",
2026-03-10T11:48:20.386 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:48:20.386 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:48:20.430 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-10T11:48:20.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:20 vm05 bash[65415]: cluster 2026-03-10T11:48:19.432744+0000 mgr.y (mgr.44107) 94 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:20.659 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:20 vm05 bash[65415]: audit 2026-03-10T11:48:20.472931+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:20.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:20 vm05 bash[68966]: cluster 2026-03-10T11:48:19.432744+0000 mgr.y (mgr.44107) 94 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:20.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:20 vm05 bash[68966]: cluster 2026-03-10T11:48:19.432744+0000 mgr.y (mgr.44107) 94 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:20.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:20 vm05 bash[68966]: audit 2026-03-10T11:48:20.472931+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:20.659 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:20 vm05 bash[68966]: audit 2026-03-10T11:48:20.472931+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:20.893 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK 2026-03-10T11:48:20.939 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types osd --limit 2' 2026-03-10T11:48:20.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:20 vm07 bash[46158]: cluster 2026-03-10T11:48:19.432744+0000 mgr.y (mgr.44107) 94 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:20.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:20 vm07 bash[46158]: cluster 2026-03-10T11:48:19.432744+0000 mgr.y (mgr.44107) 94 : cluster [DBG] pgmap v25: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:20.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:20 vm07 bash[46158]: audit 2026-03-10T11:48:20.472931+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:20.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:20 vm07 bash[46158]: audit 2026-03-10T11:48:20.472931+0000 mon.c (mon.1) 121 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:21.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:21 vm05 bash[65415]: audit 2026-03-10T11:48:20.389704+0000 mgr.y (mgr.44107) 95 : audit [DBG] from='client.44170 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:21.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:21 vm05 bash[65415]: audit 2026-03-10T11:48:20.389704+0000 mgr.y (mgr.44107) 95 : audit [DBG] from='client.44170 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:21.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:21 vm05 bash[65415]: audit 
2026-03-10T11:48:20.897273+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.105:0/1665250472' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T11:48:21.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:21 vm05 bash[65415]: audit 2026-03-10T11:48:20.897273+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.105:0/1665250472' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T11:48:21.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:21 vm05 bash[68966]: audit 2026-03-10T11:48:20.389704+0000 mgr.y (mgr.44107) 95 : audit [DBG] from='client.44170 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:21.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:21 vm05 bash[68966]: audit 2026-03-10T11:48:20.389704+0000 mgr.y (mgr.44107) 95 : audit [DBG] from='client.44170 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:21.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:21 vm05 bash[68966]: audit 2026-03-10T11:48:20.897273+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.105:0/1665250472' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T11:48:21.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:21 vm05 bash[68966]: audit 2026-03-10T11:48:20.897273+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.105:0/1665250472' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T11:48:21.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:21 vm07 bash[46158]: audit 2026-03-10T11:48:20.389704+0000 mgr.y (mgr.44107) 95 : audit [DBG] from='client.44170 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:21.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:21 vm07 bash[46158]: audit 2026-03-10T11:48:20.389704+0000 mgr.y (mgr.44107) 95 : audit [DBG] from='client.44170 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:21.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:21 vm07 bash[46158]: audit 2026-03-10T11:48:20.897273+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 192.168.123.105:0/1665250472' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T11:48:21.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:21 vm07 bash[46158]: audit 2026-03-10T11:48:20.897273+0000 mon.a (mon.0) 156 : audit [DBG] from='client.? 
192.168.123.105:0/1665250472' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch 2026-03-10T11:48:22.724 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:22.780 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:22 vm05 bash[65415]: audit 2026-03-10T11:48:21.361846+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54198 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:22.780 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:22 vm05 bash[65415]: audit 2026-03-10T11:48:21.361846+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54198 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:22.780 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:22 vm05 bash[65415]: cluster 2026-03-10T11:48:21.433100+0000 mgr.y (mgr.44107) 97 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:22.780 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:22 vm05 bash[65415]: cluster 2026-03-10T11:48:21.433100+0000 mgr.y (mgr.44107) 97 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:22.780 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:22 vm05 bash[68966]: audit 2026-03-10T11:48:21.361846+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54198 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:22.780 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:22 vm05 bash[68966]: audit 2026-03-10T11:48:21.361846+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54198 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:22.780 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:22 vm05 bash[68966]: cluster 2026-03-10T11:48:21.433100+0000 mgr.y (mgr.44107) 97 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:22.780 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:22 vm05 bash[68966]: cluster 2026-03-10T11:48:21.433100+0000 mgr.y (mgr.44107) 97 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:22.812 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! 
ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done' 2026-03-10T11:48:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:22 vm07 bash[46158]: audit 2026-03-10T11:48:21.361846+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54198 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:22 vm07 bash[46158]: audit 2026-03-10T11:48:21.361846+0000 mgr.y (mgr.44107) 96 : audit [DBG] from='client.54198 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "osd", "limit": 2, "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:22 vm07 bash[46158]: cluster 2026-03-10T11:48:21.433100+0000 mgr.y (mgr.44107) 97 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:22.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:22 vm07 bash[46158]: cluster 2026-03-10T11:48:21.433100+0000 mgr.y (mgr.44107) 97 : cluster [DBG] pgmap v26: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:48:23.270 INFO:teuthology.orchestra.run.vm05.stdout:true 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (14m) 42s ago 21m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (2m) 42s ago 21m 64.6M - 10.4.0 c8b91775d855 3d10fa6a70a7 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (2m) 42s ago 21m 43.7M - 3.5 e1d6a67b021e 5fb8678f46ba 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (2m) 42s ago 24m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (12m) 42s ago 25m 508M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (53s) 42s ago 25m 33.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (102s) 42s ago 24m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (66s) 42s ago 24m 33.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (14m) 42s ago 22m 7999k - 1.7.0 72c9c2088986 d4b69c85984a 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (14m) 42s ago 22m 7816k - 1.7.0 72c9c2088986 33ca1c822db8 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (24m) 42s ago 24m 54.3M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a 2026-03-10T11:48:23.641 
INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (24m) 42s ago 24m 56.5M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (23m) 42s ago 23m 52.0M 4096M 17.2.0 e1d6a67b021e 561729c88c06 2026-03-10T11:48:23.641 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (23m) 42s ago 23m 55.9M 4096M 17.2.0 e1d6a67b021e 56034d2898b8 2026-03-10T11:48:23.642 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (23m) 42s ago 23m 56.0M 4096M 17.2.0 e1d6a67b021e 452f5de332b6 2026-03-10T11:48:23.642 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (22m) 42s ago 22m 52.5M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6 2026-03-10T11:48:23.642 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (22m) 42s ago 22m 51.5M 4096M 17.2.0 e1d6a67b021e cb67459019f8 2026-03-10T11:48:23.642 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (22m) 42s ago 22m 54.1M 4096M 17.2.0 e1d6a67b021e c542edbe96b5 2026-03-10T11:48:23.642 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (2m) 42s ago 21m 40.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4 2026-03-10T11:48:23.642 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (21m) 42s ago 21m 88.6M - 17.2.0 e1d6a67b021e f2644e7eb2f2 2026-03-10T11:48:23.642 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (21m) 42s ago 21m 89.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": { 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 10, 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-10T11:48:23.874 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:48:24.053 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: cephadm 2026-03-10T11:48:22.719992+0000 mgr.y (mgr.44107) 98 : cephadm [INF] Upgrade: Started with target 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.053 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: cephadm 2026-03-10T11:48:22.719992+0000 mgr.y (mgr.44107) 98 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.053 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.724772+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.053 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.724772+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.053 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.726283+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.053 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.726283+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.053 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.729969+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:24.053 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.729969+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:24.053 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.731878+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.731878+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.736601+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: audit 2026-03-10T11:48:22.736601+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: cephadm 2026-03-10T11:48:22.781169+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:23 vm05 bash[65415]: cephadm 2026-03-10T11:48:22.781169+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: cephadm 2026-03-10T11:48:22.719992+0000 mgr.y (mgr.44107) 98 : 
cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: cephadm 2026-03-10T11:48:22.719992+0000 mgr.y (mgr.44107) 98 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.724772+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.724772+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.726283+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.726283+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.729969+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.729969+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.731878+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.731878+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.736601+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: audit 2026-03-10T11:48:22.736601+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: cephadm 2026-03-10T11:48:22.781169+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.054 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:23 vm05 bash[68966]: cephadm 2026-03-10T11:48:22.781169+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.082 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:48:24.082 
INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", 2026-03-10T11:48:24.082 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-10T11:48:24.082 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) osd. Upgrade limited to 2 daemons (2 remaining).", 2026-03-10T11:48:24.082 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-10T11:48:24.082 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "", 2026-03-10T11:48:24.082 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image", 2026-03-10T11:48:24.082 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-10T11:48:24.082 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:48:24.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: cephadm 2026-03-10T11:48:22.719992+0000 mgr.y (mgr.44107) 98 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: cephadm 2026-03-10T11:48:22.719992+0000 mgr.y (mgr.44107) 98 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.724772+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.724772+0000 mon.a (mon.0) 157 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.726283+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.726283+0000 mon.c (mon.1) 122 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.729969+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.729969+0000 mon.c (mon.1) 123 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.731878+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.731878+0000 mon.c (mon.1) 124 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
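Note: at this point the test has started a staggered upgrade with "ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types osd --limit 2", so cephadm will move only two OSDs to the target image and then stop; the status JSON above ("Upgrade limited to 2 daemons (2 remaining)") reflects that. The one-liner teuthology is now looping on polls the orchestrator until the upgrade either completes or reports an error. A readability-oriented sketch of that loop follows (equivalent behavior, not the verbatim test command):

    # Poll until in_progress goes false, or the status message mentions an error.
    while ceph orch upgrade status | jq '.in_progress' | grep -q true \
          && ! ceph orch upgrade status | jq -r '.message' | grep -q Error; do
        ceph orch ps                # per-daemon image/version table, as dumped above
        ceph versions               # cluster-wide version histogram
        ceph orch upgrade status    # JSON: target_image, in_progress, message, ...
        sleep 30
    done

The "ceph versions" dump above shows the expected mixed state for this stage: mons and mgrs already on 19.2.3-678-ge911bdeb (squid), while all eight OSDs and both RGWs are still on 17.2.0 (quincy).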
2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.736601+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: audit 2026-03-10T11:48:22.736601+0000 mon.a (mon.0) 158 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: cephadm 2026-03-10T11:48:22.781169+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:23 vm07 bash[46158]: cephadm 2026-03-10T11:48:22.781169+0000 mgr.y (mgr.44107) 99 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:23.264290+0000 mgr.y (mgr.44107) 100 : audit [DBG] from='client.54204 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:23.264290+0000 mgr.y (mgr.44107) 100 : audit [DBG] from='client.54204 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: cluster 2026-03-10T11:48:23.433564+0000 mgr.y (mgr.44107) 101 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: cluster 2026-03-10T11:48:23.433564+0000 mgr.y (mgr.44107) 101 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:23.454644+0000 mgr.y (mgr.44107) 102 : audit [DBG] from='client.44194 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:23.454644+0000 mgr.y (mgr.44107) 102 : audit [DBG] from='client.44194 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:23.641684+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:23.641684+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:23.877371+0000 mon.c (mon.1) 125 : audit [DBG] from='client.? 
192.168.123.105:0/3648212084' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:23.877371+0000 mon.c (mon.1) 125 : audit [DBG] from='client.? 192.168.123.105:0/3648212084' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.083269+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.44209 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.083269+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.44209 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.225512+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.225512+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.227958+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.227958+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.229354+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.229354+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.233401+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.233401+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.236966+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.236966+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.241264+0000 mon.a (mon.0) 161 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.241264+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.244291+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.244291+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.248596+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.248596+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.252612+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.252612+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.659653+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.659653+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.662001+0000 mon.c (mon.1) 131 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.662001+0000 mon.c (mon.1) 131 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.662866+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:24 vm05 bash[65415]: audit 2026-03-10T11:48:24.662866+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:23.264290+0000 mgr.y (mgr.44107) 100 : audit [DBG] from='client.54204 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", 
"target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:23.264290+0000 mgr.y (mgr.44107) 100 : audit [DBG] from='client.54204 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: cluster 2026-03-10T11:48:23.433564+0000 mgr.y (mgr.44107) 101 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: cluster 2026-03-10T11:48:23.433564+0000 mgr.y (mgr.44107) 101 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:23.454644+0000 mgr.y (mgr.44107) 102 : audit [DBG] from='client.44194 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:23.454644+0000 mgr.y (mgr.44107) 102 : audit [DBG] from='client.44194 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:23.641684+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:23.641684+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:23.877371+0000 mon.c (mon.1) 125 : audit [DBG] from='client.? 192.168.123.105:0/3648212084' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:23.877371+0000 mon.c (mon.1) 125 : audit [DBG] from='client.? 
192.168.123.105:0/3648212084' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.083269+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.44209 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.083269+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.44209 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.225512+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.225512+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.227958+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.227958+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.229354+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.229354+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.233401+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.233401+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.236966+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.236966+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.241264+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.241264+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.244291+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.244291+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.248596+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.248596+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.252612+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.252612+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.659653+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.659653+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.662001+0000 mon.c (mon.1) 131 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.662001+0000 mon.c (mon.1) 131 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.662866+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:24.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:24 vm05 bash[68966]: audit 2026-03-10T11:48:24.662866+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:25.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:23.264290+0000 mgr.y (mgr.44107) 100 : audit [DBG] from='client.54204 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:23.264290+0000 mgr.y (mgr.44107) 100 : audit [DBG] from='client.54204 -' entity='client.admin' 
cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: cluster 2026-03-10T11:48:23.433564+0000 mgr.y (mgr.44107) 101 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: cluster 2026-03-10T11:48:23.433564+0000 mgr.y (mgr.44107) 101 : cluster [DBG] pgmap v27: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:23.454644+0000 mgr.y (mgr.44107) 102 : audit [DBG] from='client.44194 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:23.454644+0000 mgr.y (mgr.44107) 102 : audit [DBG] from='client.44194 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:23.641684+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:23.641684+0000 mgr.y (mgr.44107) 103 : audit [DBG] from='client.54213 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:23.877371+0000 mon.c (mon.1) 125 : audit [DBG] from='client.? 192.168.123.105:0/3648212084' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:23.877371+0000 mon.c (mon.1) 125 : audit [DBG] from='client.? 
192.168.123.105:0/3648212084' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.083269+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.44209 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.083269+0000 mgr.y (mgr.44107) 104 : audit [DBG] from='client.44209 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.225512+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.225512+0000 mon.a (mon.0) 159 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.227958+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.227958+0000 mon.c (mon.1) 126 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.229354+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.229354+0000 mon.c (mon.1) 127 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.233401+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.233401+0000 mon.a (mon.0) 160 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.236966+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.236966+0000 mon.c (mon.1) 128 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.241264+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.241264+0000 mon.a (mon.0) 161 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.244291+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.244291+0000 mon.c (mon.1) 129 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.248596+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.248596+0000 mon.a (mon.0) 162 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.252612+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.252612+0000 mon.c (mon.1) 130 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.659653+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.659653+0000 mon.a (mon.0) 163 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.662001+0000 mon.c (mon.1) 131 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.662001+0000 mon.c (mon.1) 131 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.662866+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:25.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:24 vm07 bash[46158]: audit 2026-03-10T11:48:24.662866+0000 mon.c (mon.1) 132 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:25.473 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
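Note: the KillMode warnings in this stretch are systemd reacting to the cephadm-generated unit template /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service, which in this Quincy-era deployment sets KillMode=none; one warning is emitted per Ceph unit on the host as daemons are cycled, and the upgrade proceeds regardless. Purely as an illustration of the remedy the warning itself suggests (the test does not do this), the setting could be overridden with a drop-in:

    # Hypothetical admin-side override, not part of this test run.
    unit=ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service
    sudo mkdir -p /etc/systemd/system/${unit}.d
    printf '[Service]\nKillMode=mixed\n' | \
        sudo tee /etc/systemd/system/${unit}.d/killmode.conf
    sudo systemctl daemon-reload

KillMode=mixed is one of the two safer values the warning names; control-group would serve equally well here.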
2026-03-10T11:48:25.473 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:25.473 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:25.474 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:25.474 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:25.474 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:25.474 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:25.474 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: Stopping Ceph osd.3 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:48:25.474 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
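systemd prints the KillMode=none warning once per cephadm-managed unit on the host, which is why it recurs for every daemon above. A hedged sketch of the remediation the message itself suggests; note that cephadm owns and regenerates these unit files, so on a real cluster the fix belongs in a newer cephadm rather than in manual edits:

    # Prefer a drop-in over editing the generated unit in place
    # (unit name copied from this run's cluster fsid):
    sudo systemctl edit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service
    # In the drop-in, override the deprecated setting:
    #   [Service]
    #   KillMode=mixed
    sudo systemctl daemon-reload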
2026-03-10T11:48:25.474 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:48:25 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:25.749 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:25 vm05 bash[34644]: debug 2026-03-10T11:48:25.504+0000 7fbfdf660700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:48:25.749 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:25 vm05 bash[34644]: debug 2026-03-10T11:48:25.504+0000 7fbfdf660700 -1 osd.3 99 *** Got signal Terminated ***
2026-03-10T11:48:25.749 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:25 vm05 bash[34644]: debug 2026-03-10T11:48:25.504+0000 7fbfdf660700 -1 osd.3 99 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: cephadm 2026-03-10T11:48:24.226905+0000 mgr.y (mgr.44107) 105 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: cephadm 2026-03-10T11:48:24.226930+0000 mgr.y (mgr.44107) 106 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: cephadm 2026-03-10T11:48:24.230057+0000 mgr.y (mgr.44107) 107 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: cephadm 2026-03-10T11:48:24.237744+0000 mgr.y (mgr.44107) 108 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: cephadm 2026-03-10T11:48:24.244998+0000 mgr.y (mgr.44107) 109 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: audit 2026-03-10T11:48:24.252758+0000 mgr.y (mgr.44107) 110 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: cephadm 2026-03-10T11:48:24.253669+0000 mgr.y (mgr.44107) 111 : cephadm [INF] Upgrade: osd.3 is safe to restart
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: cephadm 2026-03-10T11:48:24.652973+0000 mgr.y (mgr.44107) 112 : cephadm [INF] Upgrade: Updating osd.3
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: cephadm 2026-03-10T11:48:24.664402+0000 mgr.y (mgr.44107) 113 : cephadm [INF] Deploying daemon osd.3 on vm05
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:25 vm05 bash[68966]: cluster 2026-03-10T11:48:25.512197+0000 mon.a (mon.0) 164 : cluster [INF] osd.3 marked itself down and dead
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: cephadm 2026-03-10T11:48:24.226905+0000 mgr.y (mgr.44107) 105 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: cephadm 2026-03-10T11:48:24.226930+0000 mgr.y (mgr.44107) 106 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: cephadm 2026-03-10T11:48:24.230057+0000 mgr.y (mgr.44107) 107 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: cephadm 2026-03-10T11:48:24.237744+0000 mgr.y (mgr.44107) 108 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: cephadm 2026-03-10T11:48:24.244998+0000 mgr.y (mgr.44107) 109 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:48:26.077 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: audit 2026-03-10T11:48:24.252758+0000 mgr.y (mgr.44107) 110 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T11:48:26.078 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: cephadm 2026-03-10T11:48:24.253669+0000 mgr.y (mgr.44107) 111 : cephadm [INF] Upgrade: osd.3 is safe to restart
2026-03-10T11:48:26.078 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: cephadm 2026-03-10T11:48:24.652973+0000 mgr.y (mgr.44107) 112 : cephadm [INF] Upgrade: Updating osd.3
2026-03-10T11:48:26.078 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: cephadm 2026-03-10T11:48:24.664402+0000 mgr.y (mgr.44107) 113 : cephadm [INF] Deploying daemon osd.3 on vm05
2026-03-10T11:48:26.078 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:25 vm05 bash[65415]: cluster 2026-03-10T11:48:25.512197+0000 mon.a (mon.0) 164 : cluster [INF] osd.3 marked itself down and dead
2026-03-10T11:48:26.078 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:25 vm05 bash[75293]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-3
2026-03-10T11:48:26.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: cephadm 2026-03-10T11:48:24.226905+0000 mgr.y (mgr.44107) 105 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:48:26.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: cephadm 2026-03-10T11:48:24.226930+0000 mgr.y (mgr.44107) 106 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:48:26.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: cephadm 2026-03-10T11:48:24.230057+0000 mgr.y (mgr.44107) 107 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:48:26.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: cephadm 2026-03-10T11:48:24.237744+0000 mgr.y (mgr.44107) 108 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:48:26.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: cephadm 2026-03-10T11:48:24.244998+0000 mgr.y (mgr.44107) 109 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:48:26.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: audit 2026-03-10T11:48:24.252758+0000 mgr.y (mgr.44107) 110 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["3"], "max": 16}]: dispatch
2026-03-10T11:48:26.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: cephadm 2026-03-10T11:48:24.253669+0000 mgr.y (mgr.44107) 111 : cephadm [INF] Upgrade: osd.3 is safe to restart
2026-03-10T11:48:26.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: cephadm 2026-03-10T11:48:24.652973+0000 mgr.y (mgr.44107) 112 : cephadm [INF] Upgrade: Updating osd.3
2026-03-10T11:48:26.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: cephadm 2026-03-10T11:48:24.664402+0000 mgr.y (mgr.44107) 113 : cephadm [INF] Deploying daemon osd.3 on vm05
2026-03-10T11:48:26.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:25 vm07 bash[46158]: cluster 2026-03-10T11:48:25.512197+0000 mon.a (mon.0) 164 : cluster [INF] osd.3 marked itself down and dead
2026-03-10T11:48:26.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
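Entries 105-113 are one complete pass of the staggered upgrade loop: resolve the target image digest, set container_image for the mgr/mon/crash groups, then take OSDs one at a time behind an ok-to-stop check. A sketch of watching that pass from a client node; none of these commands are specific to this run:

    # Overall progress and any message currently blocking the upgrade:
    ceph orch upgrade status
    # Which daemons have crossed to the target version so far:
    ceph versions
    ceph orch ps
    # Stream the cephadm log channel that produced the entries above:
    ceph -W cephadm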
2026-03-10T11:48:26.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:26.341 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:26.341 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:26.341 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:26.341 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.3.service: Deactivated successfully.
2026-03-10T11:48:26.341 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: Stopped Ceph osd.3 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:48:26.341 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:26.342 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:26.342 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:26.750 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:26 vm05 systemd[1]: Started Ceph osd.3 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:48:26.750 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:26 vm05 bash[75504]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:26 vm05 bash[65415]: cluster 2026-03-10T11:48:25.433852+0000 mgr.y (mgr.44107) 114 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:26 vm05 bash[65415]: cluster 2026-03-10T11:48:25.744600+0000 mon.a (mon.0) 165 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:26 vm05 bash[65415]: cluster 2026-03-10T11:48:25.757289+0000 mon.a (mon.0) 166 : cluster [DBG] osdmap e100: 8 total, 7 up, 8 in
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:26 vm05 bash[65415]: audit 2026-03-10T11:48:26.386054+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:26 vm05 bash[65415]: audit 2026-03-10T11:48:26.391834+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:26 vm05 bash[65415]: audit 2026-03-10T11:48:26.396561+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:26 vm05 bash[65415]: audit 2026-03-10T11:48:26.397935+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:26 vm05 bash[68966]: cluster 2026-03-10T11:48:25.433852+0000 mgr.y (mgr.44107) 114 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:26 vm05 bash[68966]: cluster 2026-03-10T11:48:25.744600+0000 mon.a (mon.0) 165 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:26 vm05 bash[68966]: cluster 2026-03-10T11:48:25.757289+0000 mon.a (mon.0) 166 : cluster [DBG] osdmap e100: 8 total, 7 up, 8 in
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:26 vm05 bash[68966]: audit 2026-03-10T11:48:26.386054+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:26 vm05 bash[68966]: audit 2026-03-10T11:48:26.391834+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:26 vm05 bash[68966]: audit 2026-03-10T11:48:26.396561+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:26 vm05 bash[68966]: audit 2026-03-10T11:48:26.397935+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:27.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:26 vm07 bash[46158]: cluster 2026-03-10T11:48:25.433852+0000 mgr.y (mgr.44107) 114 : cluster [DBG] pgmap v28: 161 pgs: 161 active+clean; 457 KiB data, 103 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:48:27.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:26 vm07 bash[46158]: cluster 2026-03-10T11:48:25.744600+0000 mon.a (mon.0) 165 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:48:27.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:26 vm07 bash[46158]: cluster 2026-03-10T11:48:25.757289+0000 mon.a (mon.0) 166 : cluster [DBG] osdmap e100: 8 total, 7 up, 8 in
2026-03-10T11:48:27.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:26 vm07 bash[46158]: audit 2026-03-10T11:48:26.386054+0000 mon.a (mon.0) 167 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:27.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:26 vm07 bash[46158]: audit 2026-03-10T11:48:26.391834+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:27.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:26 vm07 bash[46158]: audit 2026-03-10T11:48:26.396561+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:27.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:26 vm07 bash[46158]: audit 2026-03-10T11:48:26.397935+0000 mon.c (mon.1) 133 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:27.751 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:27 vm05 bash[75504]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T11:48:27.751 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:27 vm05 bash[75504]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:48:27.752 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:27 vm05 bash[75504]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-10T11:48:27.752 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:27 vm05 bash[75504]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-1c665137-ca0d-4f5c-8903-e8fb7189bf22/osd-block-0e62b553-78b1-4fbe-870e-d68c1967e6be --path /var/lib/ceph/osd/ceph-3 --no-mon-config
2026-03-10T11:48:28.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:27 vm05 bash[68966]: cluster 2026-03-10T11:48:26.763699+0000 mon.a (mon.0) 170 : cluster [DBG] osdmap e101: 8 total, 7 up, 8 in
2026-03-10T11:48:28.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:27 vm05 bash[65415]: cluster 2026-03-10T11:48:26.763699+0000 mon.a (mon.0) 170 : cluster [DBG] osdmap e101: 8 total, 7 up, 8 in
2026-03-10T11:48:28.091 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:27 vm05 bash[75504]: Running command: /usr/bin/ln -snf /dev/ceph-1c665137-ca0d-4f5c-8903-e8fb7189bf22/osd-block-0e62b553-78b1-4fbe-870e-d68c1967e6be /var/lib/ceph/osd/ceph-3/block
2026-03-10T11:48:28.091 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:27 vm05 bash[75504]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
2026-03-10T11:48:28.091 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:27 vm05 bash[75504]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2026-03-10T11:48:28.091 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:27 vm05 bash[75504]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-10T11:48:28.091 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:27 vm05 bash[75504]: --> ceph-volume lvm activate successful for osd ID: 3
2026-03-10T11:48:28.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:27 vm07 bash[46158]: cluster 2026-03-10T11:48:26.763699+0000 mon.a (mon.0) 170 : cluster [DBG] osdmap e101: 8 total, 7 up, 8 in
2026-03-10T11:48:29.026 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:28 vm05 bash[68966]: cluster 2026-03-10T11:48:27.434213+0000 mgr.y (mgr.44107) 115 : cluster [DBG] pgmap v31: 161 pgs: 66 peering, 95 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:48:29.026 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:28 vm05 bash[68966]: cluster 2026-03-10T11:48:27.761648+0000 mon.a (mon.0) 171 : cluster [WRN] Health check failed: Reduced data availability: 18 pgs inactive, 27 pgs peering (PG_AVAILABILITY)
2026-03-10T11:48:29.026 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:48:28] "GET /metrics HTTP/1.1" 200 37589 "" "Prometheus/2.51.0"
2026-03-10T11:48:29.026 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:28 vm05 bash[65415]: cluster 2026-03-10T11:48:27.434213+0000 mgr.y (mgr.44107) 115 : cluster [DBG] pgmap v31: 161 pgs: 66 peering, 95 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:48:29.026 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:28 vm05 bash[65415]: cluster 2026-03-10T11:48:27.761648+0000 mon.a (mon.0) 171 : cluster [WRN] Health check failed: Reduced data availability: 18 pgs inactive, 27 pgs peering (PG_AVAILABILITY)
2026-03-10T11:48:29.027 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:28 vm05 bash[75861]: debug 2026-03-10T11:48:28.612+0000 7f4c833a0740 -1 Falling back to public interface
2026-03-10T11:48:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:28 vm07 bash[46158]: cluster 2026-03-10T11:48:27.434213+0000 mgr.y (mgr.44107) 115 : cluster [DBG] pgmap v31: 161 pgs: 66 peering, 95 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:48:29.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:28 vm07 bash[46158]: cluster 2026-03-10T11:48:27.761648+0000 mon.a (mon.0) 171 : cluster [WRN] Health check failed: Reduced data availability: 18 pgs inactive, 27 pgs peering (PG_AVAILABILITY)
2026-03-10T11:48:30.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:29 vm05 bash[68966]: audit 2026-03-10T11:48:29.030008+0000 mgr.y (mgr.44107) 116 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
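The redeploy first tries raw-device activation, which fails because osd.3 is LVM-backed, and then falls through to ceph-volume lvm activate, which succeeds. A hedged manual equivalent, with the osd fsid copied from the LV name in this run:

    # List LVM-backed OSDs and their fsids on the host:
    cephadm shell -- ceph-volume lvm list
    # Activate osd.3 by id and fsid; --no-systemd matters inside a container,
    # where cephadm's own unit supervises the daemon:
    cephadm shell -- ceph-volume lvm activate 3 0e62b553-78b1-4fbe-870e-d68c1967e6be --no-systemd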
2026-03-10T11:48:30.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:29 vm05 bash[65415]: audit 2026-03-10T11:48:29.030008+0000 mgr.y (mgr.44107) 116 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:30.091 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:29 vm05 bash[75861]: debug 2026-03-10T11:48:29.832+0000 7f4c833a0740 -1 osd.3 0 read_superblock omap replica is missing.
2026-03-10T11:48:30.091 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:29 vm05 bash[75861]: debug 2026-03-10T11:48:29.848+0000 7f4c833a0740 -1 osd.3 99 log_to_monitors true
2026-03-10T11:48:30.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:29 vm07 bash[46158]: audit 2026-03-10T11:48:29.030008+0000 mgr.y (mgr.44107) 116 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:31.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:30 vm05 bash[68966]: cluster 2026-03-10T11:48:29.434595+0000 mgr.y (mgr.44107) 117 : cluster [DBG] pgmap v32: 161 pgs: 66 peering, 95 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:48:31.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:30 vm05 bash[68966]: audit 2026-03-10T11:48:29.855072+0000 mon.a (mon.0) 172 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T11:48:31.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:30 vm05 bash[65415]: cluster 2026-03-10T11:48:29.434595+0000 mgr.y (mgr.44107) 117 : cluster [DBG] pgmap v32: 161 pgs: 66 peering, 95 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:48:31.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:30 vm05 bash[65415]: audit 2026-03-10T11:48:29.855072+0000 mon.a (mon.0) 172 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T11:48:31.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:30 vm07 bash[46158]: cluster 2026-03-10T11:48:29.434595+0000 mgr.y (mgr.44107) 117 : cluster [DBG] pgmap v32: 161 pgs: 66 peering, 95 active+clean; 457 KiB data, 104 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:48:31.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:30 vm07 bash[46158]: audit 2026-03-10T11:48:29.855072+0000 mon.a (mon.0) 172 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010]' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-10T11:48:32.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:31 vm07 bash[46158]: audit 2026-03-10T11:48:30.803557+0000 mon.a (mon.0) 173 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T11:48:32.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:31 vm07 bash[46158]: cluster 2026-03-10T11:48:30.805063+0000 mon.a (mon.0) 174 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in
2026-03-10T11:48:32.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:31 vm07 bash[46158]: audit 2026-03-10T11:48:30.805454+0000 mon.a (mon.0) 175 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:48:32.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:31 vm05 bash[68966]: audit 2026-03-10T11:48:30.803557+0000 mon.a (mon.0) 173 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T11:48:32.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:31 vm05 bash[68966]: cluster 2026-03-10T11:48:30.805063+0000 mon.a (mon.0) 174 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in
2026-03-10T11:48:32.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:31 vm05 bash[68966]: audit 2026-03-10T11:48:30.805454+0000 mon.a (mon.0) 175 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:48:32.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:31 vm05 bash[65415]: audit 2026-03-10T11:48:30.803557+0000 mon.a (mon.0) 173 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010]' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-10T11:48:32.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:31 vm05 bash[65415]: cluster 2026-03-10T11:48:30.805063+0000 mon.a (mon.0) 174 : cluster [DBG] osdmap e102: 8 total, 7 up, 8 in
2026-03-10T11:48:32.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:31 vm05 bash[65415]: audit 2026-03-10T11:48:30.805454+0000 mon.a (mon.0) 175 : audit [INF] from='osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010]' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:48:32.341 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:31 vm05 bash[75861]: debug 2026-03-10T11:48:31.888+0000 7f4c7a94a640 -1 osd.3 99 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T11:48:33.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:32 vm07 bash[46158]: cluster 2026-03-10T11:48:31.434919+0000 mgr.y (mgr.44107) 118 : cluster [DBG] pgmap v34: 161 pgs: 66 peering, 95 active+clean; 457 KiB data, 167 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:48:33.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:32 vm05 bash[68966]: cluster 2026-03-10T11:48:31.434919+0000 mgr.y (mgr.44107) 118 : cluster [DBG] pgmap v34: 161 pgs: 66 peering, 95 active+clean; 457 KiB data, 167 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:48:33.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:32 vm05 bash[65415]: cluster 2026-03-10T11:48:31.434919+0000 mgr.y (mgr.44107) 118 : cluster [DBG] pgmap v34: 161 pgs: 66 peering, 95 active+clean; 457 KiB data, 167 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:48:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:33 vm07 bash[46158]: cluster 2026-03-10T11:48:31.878111+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 18526.734897 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T11:48:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:33 vm07 bash[46158]: cluster 2026-03-10T11:48:32.851010+0000 mon.a (mon.0) 176 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:48:34.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:33 vm07 bash[46158]: cluster 2026-03-10T11:48:32.860188+0000 mon.a (mon.0) 177 : cluster [INF] osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010] boot
2026-03-10T11:48:34.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:33 vm07 bash[46158]: cluster 2026-03-10T11:48:32.860261+0000 mon.a (mon.0) 178 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in
2026-03-10T11:48:34.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:33 vm07 bash[46158]: audit 2026-03-10T11:48:32.865717+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:48:34.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:33 vm07 bash[46158]: audit 2026-03-10T11:48:33.304449+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:34.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:33 vm07 bash[46158]: audit 2026-03-10T11:48:33.310156+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:33 vm05 bash[68966]: cluster 2026-03-10T11:48:31.878111+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 18526.734897 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:33 vm05 bash[68966]: cluster 2026-03-10T11:48:32.851010+0000 mon.a (mon.0) 176 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:33 vm05 bash[68966]: cluster 2026-03-10T11:48:32.860188+0000 mon.a (mon.0) 177 : cluster [INF] osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010] boot
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:33 vm05 bash[68966]: cluster 2026-03-10T11:48:32.860261+0000 mon.a (mon.0) 178 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:33 vm05 bash[68966]: audit 2026-03-10T11:48:32.865717+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:33 vm05 bash[68966]: audit 2026-03-10T11:48:33.304449+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:33 vm05 bash[68966]: audit 2026-03-10T11:48:33.310156+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:33 vm05 bash[65415]: cluster 2026-03-10T11:48:31.878111+0000 osd.3 (osd.3) 1 : cluster [WRN] OSD bench result of 18526.734897 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:33 vm05 bash[65415]: cluster 2026-03-10T11:48:32.851010+0000 mon.a (mon.0) 176 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:33 vm05 bash[65415]: cluster 2026-03-10T11:48:32.860188+0000 mon.a (mon.0) 177 : cluster [INF] osd.3 [v2:192.168.123.105:6826/2284648010,v1:192.168.123.105:6827/2284648010] boot
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:33 vm05 bash[65415]: cluster 2026-03-10T11:48:32.860261+0000 mon.a (mon.0) 178 : cluster [DBG] osdmap e103: 8 total, 8 up, 8 in
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:33 vm05 bash[65415]: audit 2026-03-10T11:48:32.865717+0000 mon.c (mon.1) 134 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:33 vm05 bash[65415]: audit 2026-03-10T11:48:33.304449+0000 mon.a (mon.0) 179 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:34.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:33 vm05 bash[65415]: audit 2026-03-10T11:48:33.310156+0000 mon.a (mon.0) 180 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:35.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:34 vm07 bash[46158]: cluster 2026-03-10T11:48:33.435279+0000 mgr.y (mgr.44107) 119 : cluster [DBG] pgmap v36: 161 pgs: 41 active+undersized, 25 active+undersized+degraded, 95 active+clean; 457 KiB data, 167 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s; 82/627 objects degraded (13.078%)
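[operator note] With osd.3 back up (boot, OSD_DOWN cleared, osdmap e103 above), the staggered upgrade can be sanity-checked daemon by daemon. Two read-only commands show which image and version each daemon is actually running; the --daemon-type flag spelling is an assumption based on recent cephadm releases:

    ceph orch ps --daemon-type osd   # container image and version per OSD daemon
    ceph versions                    # daemon counts per running Ceph version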
2026-03-10T11:48:35.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:34 vm07 bash[46158]: cluster 2026-03-10T11:48:33.875433+0000 mon.a (mon.0) 181 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in
2026-03-10T11:48:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:34 vm07 bash[46158]: audit 2026-03-10T11:48:33.955390+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:34 vm07 bash[46158]: audit 2026-03-10T11:48:33.967394+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:34 vm07 bash[46158]: cluster 2026-03-10T11:48:34.309099+0000 mon.a (mon.0) 184 : cluster [WRN] Health check failed: Degraded data redundancy: 82/627 objects degraded (13.078%), 25 pgs degraded (PG_DEGRADED)
2026-03-10T11:48:35.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:34 vm07 bash[46158]: cluster 2026-03-10T11:48:34.309113+0000 mon.a (mon.0) 185 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 18 pgs inactive, 27 pgs peering)
2026-03-10T11:48:35.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:34 vm05 bash[65415]: cluster 2026-03-10T11:48:33.435279+0000 mgr.y (mgr.44107) 119 : cluster [DBG] pgmap v36: 161 pgs: 41 active+undersized, 25 active+undersized+degraded, 95 active+clean; 457 KiB data, 167 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s; 82/627 objects degraded (13.078%)
2026-03-10T11:48:35.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:34 vm05 bash[65415]: cluster 2026-03-10T11:48:33.875433+0000 mon.a (mon.0) 181 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:34 vm05 bash[65415]: audit 2026-03-10T11:48:33.955390+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:34 vm05 bash[65415]: audit 2026-03-10T11:48:33.967394+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:34 vm05 bash[65415]: cluster 2026-03-10T11:48:34.309099+0000 mon.a (mon.0) 184 : cluster [WRN] Health check failed: Degraded data redundancy: 82/627 objects degraded (13.078%), 25 pgs degraded (PG_DEGRADED)
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:34 vm05 bash[65415]: cluster 2026-03-10T11:48:34.309113+0000 mon.a (mon.0) 185 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 18 pgs inactive, 27 pgs peering)
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:34 vm05 bash[68966]: cluster 2026-03-10T11:48:33.435279+0000 mgr.y (mgr.44107) 119 : cluster [DBG] pgmap v36: 161 pgs: 41 active+undersized, 25 active+undersized+degraded, 95 active+clean; 457 KiB data, 167 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s; 82/627 objects degraded (13.078%)
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:34 vm05 bash[68966]: cluster 2026-03-10T11:48:33.875433+0000 mon.a (mon.0) 181 : cluster [DBG] osdmap e104: 8 total, 8 up, 8 in
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:34 vm05 bash[68966]: audit 2026-03-10T11:48:33.955390+0000 mon.a (mon.0) 182 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:34 vm05 bash[68966]: audit 2026-03-10T11:48:33.967394+0000 mon.a (mon.0) 183 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:34 vm05 bash[68966]: cluster 2026-03-10T11:48:34.309099+0000 mon.a (mon.0) 184 : cluster [WRN] Health check failed: Degraded data redundancy: 82/627 objects degraded (13.078%), 25 pgs degraded (PG_DEGRADED)
2026-03-10T11:48:35.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:34 vm05 bash[68966]: cluster 2026-03-10T11:48:34.309113+0000 mon.a (mon.0) 185 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 18 pgs inactive, 27 pgs peering)
2026-03-10T11:48:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:36 vm05 bash[65415]: cluster 2026-03-10T11:48:35.435594+0000 mgr.y (mgr.44107) 120 : cluster [DBG] pgmap v38: 161 pgs: 41 active+undersized, 25 active+undersized+degraded, 95 active+clean; 457 KiB data, 167 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 82/627 objects degraded (13.078%)
2026-03-10T11:48:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:36 vm05 bash[65415]: audit 2026-03-10T11:48:35.480669+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:36.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:36 vm05 bash[65415]: audit 2026-03-10T11:48:35.482085+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
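[operator note] The PG_DEGRADED warning above is the expected, transient cost of restarting one OSD at a time: while osd.3 was down, writes landed on the surviving replicas, and the 82 degraded objects are the backlog it catches up on after boot. No intervention is needed unless the check lingers (it clears on its own below); to watch it drain with standard CLI:

    ceph health detail   # active health checks and their reasons
    ceph pg stat         # one-line PG summary (active+clean vs degraded counts)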
2026-03-10T11:48:36.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:36 vm05 bash[68966]: cluster 2026-03-10T11:48:35.435594+0000 mgr.y (mgr.44107) 120 : cluster [DBG] pgmap v38: 161 pgs: 41 active+undersized, 25 active+undersized+degraded, 95 active+clean; 457 KiB data, 167 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 82/627 objects degraded (13.078%)
2026-03-10T11:48:36.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:36 vm05 bash[68966]: audit 2026-03-10T11:48:35.480669+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:36.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:36 vm05 bash[68966]: audit 2026-03-10T11:48:35.482085+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:48:36.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:36 vm07 bash[46158]: cluster 2026-03-10T11:48:35.435594+0000 mgr.y (mgr.44107) 120 : cluster [DBG] pgmap v38: 161 pgs: 41 active+undersized, 25 active+undersized+degraded, 95 active+clean; 457 KiB data, 167 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 82/627 objects degraded (13.078%)
2026-03-10T11:48:36.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:36 vm07 bash[46158]: audit 2026-03-10T11:48:35.480669+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:36.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:36 vm07 bash[46158]: audit 2026-03-10T11:48:35.482085+0000 mon.c (mon.1) 135 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:48:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:37 vm05 bash[65415]: cluster 2026-03-10T11:48:37.481264+0000 mon.a (mon.0) 187 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 82/627 objects degraded (13.078%), 25 pgs degraded)
2026-03-10T11:48:37.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:37 vm05 bash[65415]: cluster 2026-03-10T11:48:37.481290+0000 mon.a (mon.0) 188 : cluster [INF] Cluster is now healthy
2026-03-10T11:48:37.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:37 vm05 bash[68966]: cluster 2026-03-10T11:48:37.481264+0000 mon.a (mon.0) 187 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 82/627 objects degraded (13.078%), 25 pgs degraded)
2026-03-10T11:48:37.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:37 vm05 bash[68966]: cluster 2026-03-10T11:48:37.481290+0000 mon.a (mon.0) 188 : cluster [INF] Cluster is now healthy
2026-03-10T11:48:37.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:37 vm07 bash[46158]: cluster 2026-03-10T11:48:37.481264+0000 mon.a (mon.0) 187 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 82/627 objects degraded (13.078%), 25 pgs degraded)
2026-03-10T11:48:37.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:37 vm07 bash[46158]: cluster 2026-03-10T11:48:37.481290+0000 mon.a (mon.0) 188 : cluster [INF] Cluster is now healthy
2026-03-10T11:48:38.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:38 vm05 bash[68966]: cluster 2026-03-10T11:48:37.436130+0000 mgr.y (mgr.44107) 121 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 525 MiB used, 159 GiB / 160 GiB avail; 308 B/s rd, 0 op/s
2026-03-10T11:48:38.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:38 vm05 bash[65415]: cluster 2026-03-10T11:48:37.436130+0000 mgr.y (mgr.44107) 121 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 525 MiB used, 159 GiB / 160 GiB avail; 308 B/s rd, 0 op/s
2026-03-10T11:48:38.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:38 vm07 bash[46158]: cluster 2026-03-10T11:48:37.436130+0000 mgr.y (mgr.44107) 121 : cluster [DBG] pgmap v39: 161 pgs: 161 active+clean; 457 KiB data, 525 MiB used, 159 GiB / 160 GiB avail; 308 B/s rd, 0 op/s
2026-03-10T11:48:39.340 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:48:38] "GET /metrics HTTP/1.1" 200 37514 "" "Prometheus/2.51.0"
2026-03-10T11:48:39.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:39 vm05 bash[65415]: audit 2026-03-10T11:48:39.039979+0000 mgr.y (mgr.44107) 122 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:39.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:39 vm05 bash[68966]: audit 2026-03-10T11:48:39.039979+0000 mgr.y (mgr.44107) 122 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:39.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:39 vm07 bash[46158]: audit 2026-03-10T11:48:39.039979+0000 mgr.y (mgr.44107) 122 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: cluster 2026-03-10T11:48:39.436431+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 525 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s
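[operator note] The "GET /metrics HTTP/1.1" 200 line is Prometheus (running on the second host, per this job's roles) scraping the active mgr's prometheus module, which listens on TCP 9283 by default. The same endpoint can be probed by hand; the hostname is taken from this run's layout and the port is the module default, both assumptions:

    curl -s http://vm05:9283/metrics | head -n 5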
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.635368+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.643766+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.647676+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.648690+0000 mon.c (mon.1) 137 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.653624+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.703023+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.704950+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.706368+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.707670+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.709162+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:39.709738+0000 mgr.y (mgr.44107) 124 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch
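[operator note] The "osd ok-to-stop" dispatches above are the staggered upgrade's safety gate: before touching osd.2, the cephadm mgr module asks the mons whether stopping it would leave any PG without enough replicas, and it only proceeds on a yes (the "safe to restart" line that follows). Both the check and the upgrade's overall progress can also be run by hand with standard CLI:

    ceph osd ok-to-stop 2      # would stopping osd.2 keep all PGs available?
    ceph orch upgrade status   # current target image and remaining daemons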
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: cephadm 2026-03-10T11:48:39.710435+0000 mgr.y (mgr.44107) 125 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: cephadm 2026-03-10T11:48:39.710435+0000 mgr.y (mgr.44107) 125 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:40.301940+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:40.301940+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:40.306796+0000 mon.c (mon.1) 143 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:40.306796+0000 mon.c (mon.1) 143 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:40.307820+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:40.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:40 vm07 bash[46158]: audit 2026-03-10T11:48:40.307820+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: cluster 2026-03-10T11:48:39.436431+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 525 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: cluster 2026-03-10T11:48:39.436431+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 525 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.635368+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.635368+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.643766+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.643766+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.647676+0000 mon.c (mon.1) 136 : 
audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.647676+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.648690+0000 mon.c (mon.1) 137 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.648690+0000 mon.c (mon.1) 137 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.653624+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.653624+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.703023+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.703023+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.704950+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.704950+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.706368+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.706368+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.707670+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.707670+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.709162+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.709162+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.709738+0000 mgr.y (mgr.44107) 124 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T11:48:40.948 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:39.709738+0000 mgr.y (mgr.44107) 124 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: cephadm 2026-03-10T11:48:39.710435+0000 mgr.y (mgr.44107) 125 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: cephadm 2026-03-10T11:48:39.710435+0000 mgr.y (mgr.44107) 125 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:40.301940+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:40.301940+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:40.306796+0000 mon.c (mon.1) 143 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:40.306796+0000 mon.c (mon.1) 143 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:40.307820+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:40 vm05 bash[65415]: audit 2026-03-10T11:48:40.307820+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: cluster 2026-03-10T11:48:39.436431+0000 mgr.y (mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 525 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: cluster 2026-03-10T11:48:39.436431+0000 mgr.y 
(mgr.44107) 123 : cluster [DBG] pgmap v40: 161 pgs: 161 active+clean; 457 KiB data, 525 MiB used, 159 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.635368+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.635368+0000 mon.a (mon.0) 189 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.643766+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.643766+0000 mon.a (mon.0) 190 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.647676+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.647676+0000 mon.c (mon.1) 136 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.648690+0000 mon.c (mon.1) 137 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.648690+0000 mon.c (mon.1) 137 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.653624+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.653624+0000 mon.a (mon.0) 191 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.703023+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.703023+0000 mon.c (mon.1) 138 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.704950+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.704950+0000 mon.c (mon.1) 139 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' 
entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.706368+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.706368+0000 mon.c (mon.1) 140 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.707670+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.707670+0000 mon.c (mon.1) 141 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.709162+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.709162+0000 mon.c (mon.1) 142 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.709738+0000 mgr.y (mgr.44107) 124 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:39.709738+0000 mgr.y (mgr.44107) 124 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["2"], "max": 16}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: cephadm 2026-03-10T11:48:39.710435+0000 mgr.y (mgr.44107) 125 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: cephadm 2026-03-10T11:48:39.710435+0000 mgr.y (mgr.44107) 125 : cephadm [INF] Upgrade: osd.2 is safe to restart 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:40.301940+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:40.301940+0000 mon.a (mon.0) 192 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:40.306796+0000 mon.c (mon.1) 143 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:40.306796+0000 mon.c (mon.1) 143 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:40.307820+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:40.949 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:40 vm05 bash[68966]: audit 2026-03-10T11:48:40.307820+0000 mon.c (mon.1) 144 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:48:41.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:41.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:41.591 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:48:41.591 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:41.591 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:41.591 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:41.591 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:41.591 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:48:41.591 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: Stopping Ceph osd.2 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:48:41.591 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:41 vm05 bash[31446]: debug 2026-03-10T11:48:41.440+0000 7f776b4f7700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:48:41.591 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:41 vm05 bash[31446]: debug 2026-03-10T11:48:41.440+0000 7f776b4f7700 -1 osd.2 104 *** Got signal Terminated ***
2026-03-10T11:48:41.591 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:41 vm05 bash[31446]: debug 2026-03-10T11:48:41.440+0000 7f776b4f7700 -1 osd.2 104 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:48:41.591 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:48:41 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
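[operator note] systemd repeats this complaint for every daemon on the host because the cephadm-generated unit ships KillMode=none: stopping the unit is delegated to the container runtime (the Terminated signal delivered via /sbin/docker-init above) rather than to systemd's own process management. If the nag had to be silenced on a lab box, a drop-in override would be the systemd-native route; note that cephadm owns and may regenerate this unit, so this is purely illustrative:

    # add a drop-in with a safer KillMode for the cephadm-generated unit
    mkdir -p /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d
    printf '[Service]\nKillMode=mixed\n' \
        > /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d/override.conf
    systemctl daemon-reload   # reload unit definitions to pick up the drop-in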
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:41.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:41 vm07 bash[46158]: cephadm 2026-03-10T11:48:40.297114+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: Updating osd.2 2026-03-10T11:48:41.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:41 vm07 bash[46158]: cephadm 2026-03-10T11:48:40.297114+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: Updating osd.2 2026-03-10T11:48:41.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:41 vm07 bash[46158]: cephadm 2026-03-10T11:48:40.309397+0000 mgr.y (mgr.44107) 127 : cephadm [INF] Deploying daemon osd.2 on vm05 2026-03-10T11:48:41.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:41 vm07 bash[46158]: cephadm 2026-03-10T11:48:40.309397+0000 mgr.y (mgr.44107) 127 : cephadm [INF] Deploying daemon osd.2 on vm05 2026-03-10T11:48:41.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:41 vm07 bash[46158]: cluster 2026-03-10T11:48:41.446203+0000 mon.a (mon.0) 193 : cluster [INF] osd.2 marked itself down and dead 2026-03-10T11:48:41.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:41 vm07 bash[46158]: cluster 2026-03-10T11:48:41.446203+0000 mon.a (mon.0) 193 : cluster [INF] osd.2 marked itself down and dead 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:41 vm05 bash[65415]: cephadm 2026-03-10T11:48:40.297114+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: Updating osd.2 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:41 vm05 bash[65415]: cephadm 2026-03-10T11:48:40.297114+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: Updating osd.2 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:41 vm05 bash[65415]: cephadm 2026-03-10T11:48:40.309397+0000 mgr.y (mgr.44107) 127 : cephadm [INF] Deploying daemon osd.2 on vm05 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:41 vm05 bash[65415]: cephadm 2026-03-10T11:48:40.309397+0000 mgr.y (mgr.44107) 127 : cephadm [INF] Deploying daemon osd.2 on vm05 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:41 vm05 bash[65415]: cluster 2026-03-10T11:48:41.446203+0000 mon.a (mon.0) 193 : cluster [INF] osd.2 marked itself down and dead 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:41 vm05 bash[65415]: cluster 2026-03-10T11:48:41.446203+0000 mon.a (mon.0) 193 : cluster [INF] osd.2 marked itself down and dead 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:41 vm05 bash[68966]: cephadm 2026-03-10T11:48:40.297114+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: Updating osd.2 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:41 vm05 bash[68966]: cephadm 2026-03-10T11:48:40.297114+0000 mgr.y (mgr.44107) 126 : cephadm [INF] Upgrade: Updating osd.2 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:41 vm05 bash[68966]: cephadm 2026-03-10T11:48:40.309397+0000 mgr.y (mgr.44107) 127 : cephadm [INF] Deploying daemon osd.2 on vm05 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:41 vm05 bash[68966]: cephadm 2026-03-10T11:48:40.309397+0000 mgr.y (mgr.44107) 127 : cephadm [INF] Deploying daemon osd.2 on vm05 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:41 vm05 bash[68966]: cluster 2026-03-10T11:48:41.446203+0000 mon.a 
(mon.0) 193 : cluster [INF] osd.2 marked itself down and dead 2026-03-10T11:48:42.005 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:41 vm05 bash[68966]: cluster 2026-03-10T11:48:41.446203+0000 mon.a (mon.0) 193 : cluster [INF] osd.2 marked itself down and dead 2026-03-10T11:48:42.005 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:41 vm05 bash[79813]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-2 2026-03-10T11:48:42.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.2.service: Deactivated successfully. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: Stopped Ceph osd.2 for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:48:42.341 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:42 vm05 systemd[1]: Started Ceph osd.2 for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:48:42.663 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:42 vm05 bash[80027]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T11:48:42.663 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:42 vm05 bash[80027]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T11:48:42.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: cluster 2026-03-10T11:48:41.436813+0000 mgr.y (mgr.44107) 128 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 238 B/s rd, 0 op/s 2026-03-10T11:48:42.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: cluster 2026-03-10T11:48:41.436813+0000 mgr.y (mgr.44107) 128 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 238 B/s rd, 0 op/s 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: cluster 2026-03-10T11:48:41.637674+0000 mon.a (mon.0) 194 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: cluster 2026-03-10T11:48:41.637674+0000 mon.a (mon.0) 194 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: cluster 2026-03-10T11:48:41.657511+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: cluster 2026-03-10T11:48:41.657511+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: audit 2026-03-10T11:48:42.285592+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 
11:48:42 vm07 bash[46158]: audit 2026-03-10T11:48:42.285592+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: audit 2026-03-10T11:48:42.292411+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: audit 2026-03-10T11:48:42.292411+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: audit 2026-03-10T11:48:42.297748+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: audit 2026-03-10T11:48:42.297748+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: audit 2026-03-10T11:48:42.299825+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:42 vm07 bash[46158]: audit 2026-03-10T11:48:42.299825+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:43.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: cluster 2026-03-10T11:48:41.436813+0000 mgr.y (mgr.44107) 128 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 238 B/s rd, 0 op/s 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: cluster 2026-03-10T11:48:41.436813+0000 mgr.y (mgr.44107) 128 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 238 B/s rd, 0 op/s 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: cluster 2026-03-10T11:48:41.637674+0000 mon.a (mon.0) 194 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: cluster 2026-03-10T11:48:41.637674+0000 mon.a (mon.0) 194 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: cluster 2026-03-10T11:48:41.657511+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: cluster 2026-03-10T11:48:41.657511+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: audit 2026-03-10T11:48:42.285592+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: audit 2026-03-10T11:48:42.285592+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: audit 2026-03-10T11:48:42.292411+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: audit 2026-03-10T11:48:42.292411+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: audit 2026-03-10T11:48:42.297748+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: audit 2026-03-10T11:48:42.297748+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: audit 2026-03-10T11:48:42.299825+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:42 vm05 bash[65415]: audit 2026-03-10T11:48:42.299825+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: cluster 2026-03-10T11:48:41.436813+0000 mgr.y (mgr.44107) 128 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 238 B/s rd, 0 op/s 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: cluster 2026-03-10T11:48:41.436813+0000 mgr.y (mgr.44107) 128 : cluster [DBG] pgmap v41: 161 pgs: 161 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 238 B/s rd, 0 op/s 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: cluster 2026-03-10T11:48:41.637674+0000 mon.a (mon.0) 194 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: cluster 2026-03-10T11:48:41.637674+0000 mon.a (mon.0) 194 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: cluster 2026-03-10T11:48:41.657511+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: cluster 2026-03-10T11:48:41.657511+0000 mon.a (mon.0) 195 : cluster [DBG] osdmap e105: 8 total, 7 up, 8 in 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: audit 2026-03-10T11:48:42.285592+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: audit 2026-03-10T11:48:42.285592+0000 mon.a (mon.0) 196 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: audit 2026-03-10T11:48:42.292411+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: audit 2026-03-10T11:48:42.292411+0000 mon.a (mon.0) 197 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: audit 2026-03-10T11:48:42.297748+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.44107 ' 
entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: audit 2026-03-10T11:48:42.297748+0000 mon.a (mon.0) 198 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: audit 2026-03-10T11:48:42.299825+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:43.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:42 vm05 bash[68966]: audit 2026-03-10T11:48:42.299825+0000 mon.c (mon.1) 145 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:48:43.670 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T11:48:43.670 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T11:48:43.670 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T11:48:43.670 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-10T11:48:43.670 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ad1fcea3-1a63-4099-a0aa-98d6ef32f7e6/osd-block-58079681-6944-4372-ab7d-0aa5717818bf --path /var/lib/ceph/osd/ceph-2 --no-mon-config 2026-03-10T11:48:43.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:43 vm07 bash[46158]: cluster 2026-03-10T11:48:42.651289+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e106: 8 total, 7 up, 8 in 2026-03-10T11:48:43.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:43 vm07 bash[46158]: cluster 2026-03-10T11:48:42.651289+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e106: 8 total, 7 up, 8 in 2026-03-10T11:48:44.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:43 vm05 bash[65415]: cluster 2026-03-10T11:48:42.651289+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e106: 8 total, 7 up, 8 in 2026-03-10T11:48:44.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:43 vm05 bash[65415]: cluster 2026-03-10T11:48:42.651289+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e106: 8 total, 7 up, 8 in 2026-03-10T11:48:44.091 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: Running command: /usr/bin/ln -snf /dev/ceph-ad1fcea3-1a63-4099-a0aa-98d6ef32f7e6/osd-block-58079681-6944-4372-ab7d-0aa5717818bf /var/lib/ceph/osd/ceph-2/block 2026-03-10T11:48:44.091 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block 2026-03-10T11:48:44.091 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2 2026-03-10T11:48:44.091 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-10T11:48:44.091 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:43 vm05 bash[80027]: --> ceph-volume lvm activate successful for osd ID: 2 2026-03-10T11:48:44.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:43 vm05 
bash[68966]: cluster 2026-03-10T11:48:42.651289+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e106: 8 total, 7 up, 8 in 2026-03-10T11:48:44.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:43 vm05 bash[68966]: cluster 2026-03-10T11:48:42.651289+0000 mon.a (mon.0) 199 : cluster [DBG] osdmap e106: 8 total, 7 up, 8 in 2026-03-10T11:48:44.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:44 vm05 bash[65415]: cluster 2026-03-10T11:48:43.437104+0000 mgr.y (mgr.44107) 129 : cluster [DBG] pgmap v44: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:44.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:44 vm05 bash[65415]: cluster 2026-03-10T11:48:43.437104+0000 mgr.y (mgr.44107) 129 : cluster [DBG] pgmap v44: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:44.841 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:44 vm05 bash[80388]: debug 2026-03-10T11:48:44.580+0000 7f373c5ab740 -1 Falling back to public interface 2026-03-10T11:48:44.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:44 vm05 bash[68966]: cluster 2026-03-10T11:48:43.437104+0000 mgr.y (mgr.44107) 129 : cluster [DBG] pgmap v44: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:44.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:44 vm05 bash[68966]: cluster 2026-03-10T11:48:43.437104+0000 mgr.y (mgr.44107) 129 : cluster [DBG] pgmap v44: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:44.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:44 vm07 bash[46158]: cluster 2026-03-10T11:48:43.437104+0000 mgr.y (mgr.44107) 129 : cluster [DBG] pgmap v44: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:44.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:44 vm07 bash[46158]: cluster 2026-03-10T11:48:43.437104+0000 mgr.y (mgr.44107) 129 : cluster [DBG] pgmap v44: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 255 B/s rd, 0 op/s 2026-03-10T11:48:46.090 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:45 vm05 bash[80388]: debug 2026-03-10T11:48:45.800+0000 7f373c5ab740 -1 osd.2 0 read_superblock omap replica is missing. 
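The ceph-volume sequence above is the redeploy re-activating the existing OSD under the new image: the initial "Failed to activate via raw" probe is expected noise for an LVM-backed OSD, since ceph-volume tries raw mode first, after which "lvm activate" primes /var/lib/ceph/osd/ceph-2 from the logical volume and succeeds. Roughly the same activation can be reproduced from the host; this is a sketch assuming a cephadm deployment and the OSD id and fsid shown in the log:

    # Inspect the LVs backing the OSDs, then re-run the activation step.
    # --no-systemd matches cephadm deployments, where cephadm (not
    # ceph-volume) owns the systemd units.
    cephadm ceph-volume -- lvm list
    cephadm ceph-volume -- lvm activate 2 58079681-6944-4372-ab7d-0aa5717818bf --no-systemd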
2026-03-10T11:48:46.091 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:45 vm05 bash[80388]: debug 2026-03-10T11:48:45.812+0000 7f373c5ab740 -1 osd.2 104 log_to_monitors true 2026-03-10T11:48:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:46 vm05 bash[68966]: cluster 2026-03-10T11:48:45.437386+0000 mgr.y (mgr.44107) 130 : cluster [DBG] pgmap v45: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:48:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:46 vm05 bash[68966]: cluster 2026-03-10T11:48:45.437386+0000 mgr.y (mgr.44107) 130 : cluster [DBG] pgmap v45: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:48:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:46 vm05 bash[68966]: audit 2026-03-10T11:48:45.484652+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:46 vm05 bash[68966]: audit 2026-03-10T11:48:45.484652+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:46 vm05 bash[68966]: audit 2026-03-10T11:48:45.820114+0000 mon.a (mon.0) 201 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T11:48:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:46 vm05 bash[68966]: audit 2026-03-10T11:48:45.820114+0000 mon.a (mon.0) 201 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T11:48:46.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:46 vm05 bash[65415]: cluster 2026-03-10T11:48:45.437386+0000 mgr.y (mgr.44107) 130 : cluster [DBG] pgmap v45: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:48:46.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:46 vm05 bash[65415]: cluster 2026-03-10T11:48:45.437386+0000 mgr.y (mgr.44107) 130 : cluster [DBG] pgmap v45: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:48:46.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:46 vm05 bash[65415]: audit 2026-03-10T11:48:45.484652+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:46.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:46 vm05 bash[65415]: audit 2026-03-10T11:48:45.484652+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:46.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:46 vm05 bash[65415]: audit 2026-03-10T11:48:45.820114+0000 mon.a (mon.0) 201 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T11:48:46.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:46 vm05 bash[65415]: audit 2026-03-10T11:48:45.820114+0000 mon.a (mon.0) 201 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' 
cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T11:48:46.841 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:48:46 vm05 bash[80388]: debug 2026-03-10T11:48:46.704+0000 7f3734356640 -1 osd.2 104 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T11:48:46.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:46 vm07 bash[46158]: cluster 2026-03-10T11:48:45.437386+0000 mgr.y (mgr.44107) 130 : cluster [DBG] pgmap v45: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:48:46.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:46 vm07 bash[46158]: cluster 2026-03-10T11:48:45.437386+0000 mgr.y (mgr.44107) 130 : cluster [DBG] pgmap v45: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 125 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:48:46.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:46 vm07 bash[46158]: audit 2026-03-10T11:48:45.484652+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:46.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:46 vm07 bash[46158]: audit 2026-03-10T11:48:45.484652+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:46.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:46 vm07 bash[46158]: audit 2026-03-10T11:48:45.820114+0000 mon.a (mon.0) 201 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T11:48:46.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:46 vm07 bash[46158]: audit 2026-03-10T11:48:45.820114+0000 mon.a (mon.0) 201 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:47 vm05 bash[68966]: audit 2026-03-10T11:48:46.684233+0000 mon.a (mon.0) 202 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:47 vm05 bash[68966]: audit 2026-03-10T11:48:46.684233+0000 mon.a (mon.0) 202 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:47 vm05 bash[68966]: cluster 2026-03-10T11:48:46.692841+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:47 vm05 bash[68966]: cluster 2026-03-10T11:48:46.692841+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:47 vm05 bash[68966]: audit 2026-03-10T11:48:46.692985+0000 mon.a (mon.0) 204 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", 
"root=default"]}]: dispatch 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:47 vm05 bash[68966]: audit 2026-03-10T11:48:46.692985+0000 mon.a (mon.0) 204 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:47 vm05 bash[65415]: audit 2026-03-10T11:48:46.684233+0000 mon.a (mon.0) 202 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:47 vm05 bash[65415]: audit 2026-03-10T11:48:46.684233+0000 mon.a (mon.0) 202 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:47 vm05 bash[65415]: cluster 2026-03-10T11:48:46.692841+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:47 vm05 bash[65415]: cluster 2026-03-10T11:48:46.692841+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:47 vm05 bash[65415]: audit 2026-03-10T11:48:46.692985+0000 mon.a (mon.0) 204 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:48:48.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:47 vm05 bash[65415]: audit 2026-03-10T11:48:46.692985+0000 mon.a (mon.0) 204 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:48:48.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:47 vm07 bash[46158]: audit 2026-03-10T11:48:46.684233+0000 mon.a (mon.0) 202 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T11:48:48.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:47 vm07 bash[46158]: audit 2026-03-10T11:48:46.684233+0000 mon.a (mon.0) 202 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-10T11:48:48.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:47 vm07 bash[46158]: cluster 2026-03-10T11:48:46.692841+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in 2026-03-10T11:48:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:47 vm07 bash[46158]: cluster 2026-03-10T11:48:46.692841+0000 mon.a (mon.0) 203 : cluster [DBG] osdmap e107: 8 total, 7 up, 8 in 2026-03-10T11:48:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:47 vm07 bash[46158]: audit 
2026-03-10T11:48:46.692985+0000 mon.a (mon.0) 204 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:48:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:47 vm07 bash[46158]: audit 2026-03-10T11:48:46.692985+0000 mon.a (mon.0) 204 : audit [INF] from='osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.437704+0000 mgr.y (mgr.44107) 131 : cluster [DBG] pgmap v47: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.437704+0000 mgr.y (mgr.44107) 131 : cluster [DBG] pgmap v47: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.684311+0000 mon.a (mon.0) 205 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.684311+0000 mon.a (mon.0) 205 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.684328+0000 mon.a (mon.0) 206 : cluster [INF] Cluster is now healthy 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.684328+0000 mon.a (mon.0) 206 : cluster [INF] Cluster is now healthy 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.685374+0000 mon.a (mon.0) 207 : cluster [WRN] Health check failed: Degraded data redundancy: 59/627 objects degraded (9.410%), 16 pgs degraded (PG_DEGRADED) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.685374+0000 mon.a (mon.0) 207 : cluster [WRN] Health check failed: Degraded data redundancy: 59/627 objects degraded (9.410%), 16 pgs degraded (PG_DEGRADED) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.689390+0000 mon.a (mon.0) 208 : cluster [INF] osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600] boot 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.689390+0000 mon.a (mon.0) 208 : cluster [INF] osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600] boot 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.689462+0000 mon.a (mon.0) 209 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:47.689462+0000 mon.a (mon.0) 209 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: audit 2026-03-10T11:48:47.695678+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: audit 2026-03-10T11:48:47.695678+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:48.692599+0000 mon.a (mon.0) 210 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:48 vm05 bash[68966]: cluster 2026-03-10T11:48:48.692599+0000 mon.a (mon.0) 210 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.437704+0000 mgr.y (mgr.44107) 131 : cluster [DBG] pgmap v47: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.437704+0000 mgr.y (mgr.44107) 131 : cluster [DBG] pgmap v47: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.684311+0000 mon.a (mon.0) 205 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.684311+0000 mon.a (mon.0) 205 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.684328+0000 mon.a (mon.0) 206 : cluster [INF] Cluster is now healthy 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.684328+0000 mon.a (mon.0) 206 : cluster [INF] Cluster is now healthy 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.685374+0000 mon.a (mon.0) 207 : cluster [WRN] Health check failed: Degraded data redundancy: 59/627 objects degraded (9.410%), 16 pgs degraded (PG_DEGRADED) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.685374+0000 mon.a (mon.0) 207 : cluster [WRN] Health check failed: Degraded data redundancy: 59/627 objects degraded (9.410%), 16 pgs degraded (PG_DEGRADED) 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.689390+0000 mon.a (mon.0) 208 : cluster [INF] osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600] boot 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: 
cluster 2026-03-10T11:48:47.689390+0000 mon.a (mon.0) 208 : cluster [INF] osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600] boot 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.689462+0000 mon.a (mon.0) 209 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:47.689462+0000 mon.a (mon.0) 209 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: audit 2026-03-10T11:48:47.695678+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: audit 2026-03-10T11:48:47.695678+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:48.692599+0000 mon.a (mon.0) 210 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:48 vm05 bash[65415]: cluster 2026-03-10T11:48:48.692599+0000 mon.a (mon.0) 210 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T11:48:49.044 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:48 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:48:48] "GET /metrics HTTP/1.1" 200 37514 "" "Prometheus/2.51.0" 2026-03-10T11:48:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.437704+0000 mgr.y (mgr.44107) 131 : cluster [DBG] pgmap v47: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:49.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.437704+0000 mgr.y (mgr.44107) 131 : cluster [DBG] pgmap v47: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.684311+0000 mon.a (mon.0) 205 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.684311+0000 mon.a (mon.0) 205 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.684328+0000 mon.a (mon.0) 206 : cluster [INF] Cluster is now healthy 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.684328+0000 mon.a (mon.0) 206 : cluster [INF] Cluster is now healthy 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.685374+0000 mon.a (mon.0) 207 : cluster [WRN] Health check failed: Degraded data redundancy: 59/627 objects degraded (9.410%), 16 pgs degraded (PG_DEGRADED) 
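The health churn above is the expected shape of a single-OSD restart: OSD_DOWN clears the moment osd.2 boots, "Cluster is now healthy" and a transient PG_DEGRADED can land within the same second, and the osdmap returns to 8 up / 8 in while the 59 degraded objects recover over the next few pgmap versions. Watching the same window by hand is just a matter of polling, for example:

    # High-level cluster state, then per-check detail while recovery drains.
    ceph -s
    ceph health detail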
2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.685374+0000 mon.a (mon.0) 207 : cluster [WRN] Health check failed: Degraded data redundancy: 59/627 objects degraded (9.410%), 16 pgs degraded (PG_DEGRADED) 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.689390+0000 mon.a (mon.0) 208 : cluster [INF] osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600] boot 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.689390+0000 mon.a (mon.0) 208 : cluster [INF] osd.2 [v2:192.168.123.105:6818/3958508600,v1:192.168.123.105:6819/3958508600] boot 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.689462+0000 mon.a (mon.0) 209 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:47.689462+0000 mon.a (mon.0) 209 : cluster [DBG] osdmap e108: 8 total, 8 up, 8 in 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: audit 2026-03-10T11:48:47.695678+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: audit 2026-03-10T11:48:47.695678+0000 mon.c (mon.1) 146 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:48.692599+0000 mon.a (mon.0) 210 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T11:48:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:48 vm07 bash[46158]: cluster 2026-03-10T11:48:48.692599+0000 mon.a (mon.0) 210 : cluster [DBG] osdmap e109: 8 total, 8 up, 8 in 2026-03-10T11:48:50.434 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.047961+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:48:50.434 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.047961+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:48:50.434 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.115851+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.435 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.115851+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.435 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.124088+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.435 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.124088+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
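Once recovery settles, the run moves on to its verification commands, whose output follows below. For context, a staggered upgrade of this shape is started and observed with commands along these lines; the parameters here are illustrative rather than the suite's literal invocation, and <target-image> is a placeholder:

    # Upgrade one OSD at a time toward the target image, then watch progress.
    ceph orch upgrade start --image <target-image> --daemon-types osd --limit 1
    ceph orch upgrade status
    ceph orch ps --daemon-type osd
    ceph versions

The mixed inventory below, with 17.2.0 (quincy) and 19.2.3-678-ge911bdeb (squid) daemons side by side in both "ceph orch ps" and "ceph versions", is exactly the intermediate state a partial, staggered upgrade is expected to show.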
2026-03-10T11:48:50.435 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.711477+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.435 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.711477+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.435 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.717855+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.435 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:50 vm07 bash[46158]: audit 2026-03-10T11:48:49.717855+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.047961+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.047961+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.115851+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.115851+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.124088+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.124088+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.711477+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.711477+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.717855+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:50 vm05 bash[68966]: audit 2026-03-10T11:48:49.717855+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.047961+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.047961+0000 mgr.y (mgr.44107) 132 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.115851+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.115851+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.124088+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.124088+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.711477+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.711477+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.717855+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:50.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:50 vm05 bash[65415]: audit 2026-03-10T11:48:49.717855+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:48:51.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:51 vm07 bash[46158]: cluster 2026-03-10T11:48:49.437994+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v50: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:51 vm07 bash[46158]: cluster 2026-03-10T11:48:49.437994+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v50: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:51 vm07 bash[46158]: audit 2026-03-10T11:48:50.476702+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:51 vm07 bash[46158]: audit 2026-03-10T11:48:50.476702+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:51.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:51 vm05 bash[68966]: cluster 2026-03-10T11:48:49.437994+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v50: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:51.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:51 vm05 bash[68966]: cluster 2026-03-10T11:48:49.437994+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v50: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 
MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:51.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:51 vm05 bash[68966]: audit 2026-03-10T11:48:50.476702+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:51.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:51 vm05 bash[68966]: audit 2026-03-10T11:48:50.476702+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:51.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:51 vm05 bash[65415]: cluster 2026-03-10T11:48:49.437994+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v50: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:51.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:51 vm05 bash[65415]: cluster 2026-03-10T11:48:49.437994+0000 mgr.y (mgr.44107) 133 : cluster [DBG] pgmap v50: 161 pgs: 31 active+undersized, 16 active+undersized+degraded, 114 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 59/627 objects degraded (9.410%) 2026-03-10T11:48:51.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:51 vm05 bash[65415]: audit 2026-03-10T11:48:50.476702+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:51.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:51 vm05 bash[65415]: audit 2026-03-10T11:48:50.476702+0000 mon.c (mon.1) 147 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:48:53.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:53 vm07 bash[46158]: cluster 2026-03-10T11:48:51.438324+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v51: 161 pgs: 29 active+undersized, 15 active+undersized+degraded, 117 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 54/627 objects degraded (8.612%) 2026-03-10T11:48:53.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:53 vm07 bash[46158]: cluster 2026-03-10T11:48:51.438324+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v51: 161 pgs: 29 active+undersized, 15 active+undersized+degraded, 117 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 54/627 objects degraded (8.612%) 2026-03-10T11:48:53.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:53 vm05 bash[68966]: cluster 2026-03-10T11:48:51.438324+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v51: 161 pgs: 29 active+undersized, 15 active+undersized+degraded, 117 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 54/627 objects degraded (8.612%) 2026-03-10T11:48:53.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:53 vm05 bash[68966]: cluster 2026-03-10T11:48:51.438324+0000 mgr.y (mgr.44107) 134 : cluster [DBG] pgmap v51: 161 pgs: 29 active+undersized, 15 active+undersized+degraded, 117 active+clean; 457 KiB data, 144 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 54/627 objects degraded (8.612%) 2026-03-10T11:48:53.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:53 vm05 bash[65415]: cluster 
2026-03-10T11:48:54.315 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:48:54.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:54 vm07 bash[46158]: cluster 2026-03-10T11:48:53.438803+0000 mgr.y (mgr.44107) 135 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 758 B/s rd, 0 op/s
2026-03-10T11:48:54.529 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:54 vm05 bash[68966]: cluster 2026-03-10T11:48:53.438803+0000 mgr.y (mgr.44107) 135 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 758 B/s rd, 0 op/s
2026-03-10T11:48:54.534 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:54 vm05 bash[65415]: cluster 2026-03-10T11:48:53.438803+0000 mgr.y (mgr.44107) 135 : cluster [DBG] pgmap v52: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 758 B/s rd, 0 op/s
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (15m) 5s ago 22m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (2m) 73s ago 22m 64.6M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (3m) 5s ago 21m 43.8M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (3m) 73s ago 24m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (12m) 5s ago 25m 525M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (84s) 5s ago 25m 43.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (2m) 73s ago 25m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (97s) 5s ago 25m 41.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (15m) 5s ago 22m 7900k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (15m) 73s ago 22m 7816k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (24m) 5s ago 24m 54.8M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (24m) 5s ago 24m 57.3M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (10s) 5s ago 24m 21.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (26s) 5s ago 23m 65.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (23m) 73s ago 23m 56.0M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (23m) 73s ago 23m 52.5M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (23m) 73s ago 23m 51.5M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (22m) 73s ago 22m 54.1M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (3m) 73s ago 22m 40.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (21m) 5s ago 21m 88.8M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:48:54.768 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (21m) 73s ago 21m 89.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    "mon": {
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    "mgr": {
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    "osd": {
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 6,
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    "rgw": {
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    "overall": {
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8,
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:    }
2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:}
"rgw": { 2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2 2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8, 2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7 2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-10T11:48:55.047 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:48:55.260 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:48:55.260 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc", 2026-03-10T11:48:55.260 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true, 2026-03-10T11:48:55.260 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) osd. Upgrade limited to 2 daemons (0 remaining).", 2026-03-10T11:48:55.260 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [], 2026-03-10T11:48:55.260 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "2/8 daemons upgraded", 2026-03-10T11:48:55.260 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading osd daemons", 2026-03-10T11:48:55.260 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-10T11:48:55.260 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: cluster 2026-03-10T11:48:54.140717+0000 mon.a (mon.0) 215 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 54/627 objects degraded (8.612%), 15 pgs degraded) 2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: cluster 2026-03-10T11:48:54.140717+0000 mon.a (mon.0) 215 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 54/627 objects degraded (8.612%), 15 pgs degraded) 2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: cluster 2026-03-10T11:48:54.140731+0000 mon.a (mon.0) 216 : cluster [INF] Cluster is now healthy 2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: cluster 2026-03-10T11:48:54.140731+0000 mon.a (mon.0) 216 : cluster [INF] Cluster is now healthy 2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: audit 2026-03-10T11:48:54.309173+0000 mgr.y (mgr.44107) 136 : audit [DBG] from='client.54237 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: audit 2026-03-10T11:48:54.309173+0000 mgr.y (mgr.44107) 136 : audit [DBG] from='client.54237 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: audit 2026-03-10T11:48:54.509896+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.34262 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: 
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: cluster 2026-03-10T11:48:54.140717+0000 mon.a (mon.0) 215 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 54/627 objects degraded (8.612%), 15 pgs degraded)
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: cluster 2026-03-10T11:48:54.140731+0000 mon.a (mon.0) 216 : cluster [INF] Cluster is now healthy
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: audit 2026-03-10T11:48:54.309173+0000 mgr.y (mgr.44107) 136 : audit [DBG] from='client.54237 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: audit 2026-03-10T11:48:54.509896+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.34262 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: audit 2026-03-10T11:48:54.766935+0000 mgr.y (mgr.44107) 138 : audit [DBG] from='client.44233 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:55 vm05 bash[65415]: audit 2026-03-10T11:48:55.048429+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.105:0/676559007' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:55 vm05 bash[68966]: cluster 2026-03-10T11:48:54.140717+0000 mon.a (mon.0) 215 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 54/627 objects degraded (8.612%), 15 pgs degraded)
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:55 vm05 bash[68966]: cluster 2026-03-10T11:48:54.140731+0000 mon.a (mon.0) 216 : cluster [INF] Cluster is now healthy
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:55 vm05 bash[68966]: audit 2026-03-10T11:48:54.309173+0000 mgr.y (mgr.44107) 136 : audit [DBG] from='client.54237 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:55 vm05 bash[68966]: audit 2026-03-10T11:48:54.509896+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.34262 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:55 vm05 bash[68966]: audit 2026-03-10T11:48:54.766935+0000 mgr.y (mgr.44107) 138 : audit [DBG] from='client.44233 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:55.398 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:55 vm05 bash[68966]: audit 2026-03-10T11:48:55.048429+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.105:0/676559007' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:55.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:55 vm07 bash[46158]: cluster 2026-03-10T11:48:54.140717+0000 mon.a (mon.0) 215 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 54/627 objects degraded (8.612%), 15 pgs degraded)
2026-03-10T11:48:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:55 vm07 bash[46158]: cluster 2026-03-10T11:48:54.140731+0000 mon.a (mon.0) 216 : cluster [INF] Cluster is now healthy
2026-03-10T11:48:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:55 vm07 bash[46158]: audit 2026-03-10T11:48:54.309173+0000 mgr.y (mgr.44107) 136 : audit [DBG] from='client.54237 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:55 vm07 bash[46158]: audit 2026-03-10T11:48:54.509896+0000 mgr.y (mgr.44107) 137 : audit [DBG] from='client.34262 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:55 vm07 bash[46158]: audit 2026-03-10T11:48:54.766935+0000 mgr.y (mgr.44107) 138 : audit [DBG] from='client.44233 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:55 vm07 bash[46158]: audit 2026-03-10T11:48:55.048429+0000 mon.b (mon.2) 25 : audit [DBG] from='client.? 192.168.123.105:0/676559007' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
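With PG_DEGRADED cleared and the cluster log reporting healthy, the harness can safely continue; gating the next staggered pass on health is the usual pattern. An illustrative wait loop (no timeout handling, unlike a real harness):

    # Block until the cluster reports HEALTH_OK again.
    until ceph health | grep -q HEALTH_OK; do
        sleep 10
    done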
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.264393+0000 mgr.y (mgr.44107) 139 : audit [DBG] from='client.34277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: cluster 2026-03-10T11:48:55.439116+0000 mgr.y (mgr.44107) 140 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.454588+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.463433+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.465921+0000 mon.c (mon.1) 148 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.466942+0000 mon.c (mon.1) 149 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.471805+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.517012+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.518844+0000 mon.c (mon.1) 151 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.520644+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: cephadm 2026-03-10T11:48:55.521669+0000 mgr.y (mgr.44107) 141 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.526537+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.529666+0000 mon.c (mon.1) 153 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.533191+0000 mon.c (mon.1) 154 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.534702+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.536157+0000 mon.c (mon.1) 156 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.537350+0000 mon.c (mon.1) 157 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.538446+0000 mon.c (mon.1) 158 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: cephadm 2026-03-10T11:48:55.539151+0000 mgr.y (mgr.44107) 142 : cephadm [INF] Upgrade: Finalizing container_image settings
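While finalizing, cephadm drops the per-section `container_image` pins it set during the upgrade, so each daemon type follows the cluster default image again. The audit entries below are the mon-side view of plain `ceph config rm` calls; hand-run equivalents would look like this (the section list mirrors the ones visible in this log):

    # CLI form of the cleanup the mgr performs in the audit trail below.
    for who in mgr mon client.crash osd mds client.rgw client.rbd-mirror \
               client.ceph-exporter client.iscsi client.nfs client.nvmeof; do
        ceph config rm "$who" container_image
    done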
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.540401+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.540750+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.264393+0000 mgr.y (mgr.44107) 139 : audit [DBG] from='client.34277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: cluster 2026-03-10T11:48:55.439116+0000 mgr.y (mgr.44107) 140 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.454588+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.463433+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.465921+0000 mon.c (mon.1) 148 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.466942+0000 mon.c (mon.1) 149 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.471805+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.517012+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.518844+0000 mon.c (mon.1) 151 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.520644+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: cephadm 2026-03-10T11:48:55.521669+0000 mgr.y (mgr.44107) 141 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.526537+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.529666+0000 mon.c (mon.1) 153 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.533191+0000 mon.c (mon.1) 154 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.534702+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.536157+0000 mon.c (mon.1) 156 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.537350+0000 mon.c (mon.1) 157 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.538446+0000 mon.c (mon.1) 158 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: cephadm 2026-03-10T11:48:55.539151+0000 mgr.y (mgr.44107) 142 : cephadm [INF] Upgrade: Finalizing container_image settings
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.540401+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.540401+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.540750+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.540750+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.544330+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.544330+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:48:56.842 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.546413+0000 mon.c (mon.1) 160 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.546413+0000 mon.c (mon.1) 160 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.546603+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.546603+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.549181+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.549181+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 
bash[65415]: audit 2026-03-10T11:48:55.551526+0000 mon.c (mon.1) 161 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.551526+0000 mon.c (mon.1) 161 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.551774+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.551774+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.555679+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.555679+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.557093+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.557093+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.557408+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.557408+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.557864+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.557864+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:48:56.843 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.557984+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.557984+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.558492+0000 mon.c (mon.1) 164 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.558492+0000 mon.c (mon.1) 164 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.558606+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.558606+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.559045+0000 mon.c (mon.1) 165 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.559045+0000 mon.c (mon.1) 165 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.559253+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.559253+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.559745+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.559745+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.560048+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.560048+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.560517+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.560517+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.560670+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.560670+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.561210+0000 mon.c (mon.1) 168 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.561210+0000 mon.c (mon.1) 168 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.561317+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.561317+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.561810+0000 mon.c (mon.1) 169 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.561810+0000 mon.c (mon.1) 169 : audit [INF] 
from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.561936+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.562404+0000 mon.c (mon.1) 170 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.562509+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.567963+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.569617+0000 mon.c (mon.1) 171 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.569984+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.570470+0000 mon.c (mon.1) 172 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.570691+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.843 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.571169+0000 mon.c (mon.1) 173 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.571296+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.571761+0000 mon.c (mon.1) 174 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.571971+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.572381+0000 mon.c (mon.1) 175 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.572602+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.573038+0000 mon.c (mon.1) 176 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.573249+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: cephadm 2026-03-10T11:48:55.573666+0000 mgr.y (mgr.44107) 143 : cephadm [INF] Upgrade: Complete!
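The mgr.y "Upgrade: Complete!" record above marks the end of the staggered upgrade this job exercises; the surrounding audit records show cephadm dropping the per-daemon-type container_image overrides it set while each daemon group was upgraded. A minimal sketch for confirming that end state by hand, using commands that also appear elsewhere in this run (run via cephadm shell or on any host with the admin keyring):

    # the upgrade should report that it is no longer in progress once complete
    ceph orch upgrade status
    # every daemon should now report the target version
    ceph versions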
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.573994+0000 mon.c (mon.1) 177 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.574190+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.578466+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.578941+0000 mon.c (mon.1) 178 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.580105+0000 mon.c (mon.1) 179 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.580594+0000 mon.c (mon.1) 180 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.584462+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.626338+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.627537+0000 mon.c (mon.1) 182 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.628099+0000 mon.c (mon.1) 183 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:56 vm05 bash[65415]: audit 2026-03-10T11:48:55.632784+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.544330+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.546413+0000 mon.c (mon.1) 160 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.546603+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.549181+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.551526+0000 mon.c (mon.1) 161 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.551774+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:48:56.844 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.555679+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.557093+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.557408+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.557864+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.557984+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.558492+0000 mon.c (mon.1) 164 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.558606+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.559045+0000 mon.c (mon.1) 165 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.559253+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.559745+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.560048+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.560517+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.560670+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.561210+0000 mon.c (mon.1) 168 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.561317+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.561810+0000 mon.c (mon.1) 169 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.561936+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.562404+0000 mon.c (mon.1) 170 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.562509+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.567963+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.569617+0000 mon.c (mon.1) 171 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.569984+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.570470+0000 mon.c (mon.1) 172 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.570691+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.571169+0000 mon.c (mon.1) 173 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.571296+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.571761+0000 mon.c (mon.1) 174 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.571971+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.572381+0000 mon.c (mon.1) 175 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.572602+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.573038+0000 mon.c (mon.1) 176 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.573249+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.845 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: cephadm 2026-03-10T11:48:55.573666+0000 mgr.y (mgr.44107) 143 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.573994+0000 mon.c (mon.1) 177 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.574190+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.578466+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.578941+0000 mon.c (mon.1) 178 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.580105+0000 mon.c (mon.1) 179 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.580594+0000 mon.c (mon.1) 180 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.584462+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.626338+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.627537+0000 mon.c (mon.1) 182 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.628099+0000 mon.c (mon.1) 183 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:56.846 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:56 vm05 bash[68966]: audit 2026-03-10T11:48:55.632784+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.264393+0000 mgr.y (mgr.44107) 139 : audit [DBG] from='client.34277 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: cluster 2026-03-10T11:48:55.439116+0000 mgr.y (mgr.44107) 140 : cluster [DBG] pgmap v53: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.454588+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.463433+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.465921+0000 mon.c (mon.1) 148 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.466942+0000 mon.c (mon.1) 149 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.471805+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.517012+0000 mon.c (mon.1) 150 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.518844+0000 mon.c (mon.1) 151 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.520644+0000 mon.c (mon.1) 152 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: cephadm 2026-03-10T11:48:55.521669+0000 mgr.y (mgr.44107) 141 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.526537+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.529666+0000 mon.c (mon.1) 153 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.533191+0000 mon.c (mon.1) 154 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.534702+0000 mon.c (mon.1) 155 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.536157+0000 mon.c (mon.1) 156 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.537350+0000 mon.c (mon.1) 157 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.538446+0000 mon.c (mon.1) 158 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: cephadm 2026-03-10T11:48:55.539151+0000 mgr.y (mgr.44107) 142 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.540401+0000 mon.c (mon.1) 159 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.540750+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.544330+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
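As the records above show, finalization is configuration cleanup: cephadm polls "versions" until every daemon reports the target release, sets container_image for daemon types it does not upgrade in place (here nvmeof), then removes the per-daemon-type container_image overrides so future daemons fall back to the global default image. A rough sketch for inspecting the result by hand (the grep filter is an assumption, not something the suite runs):

    # after finalization, only the global container_image entry should remain
    ceph config dump | grep container_image
    # the saved upgrade checkpoint is deleted on completion, so this lookup is expected to fail
    ceph config-key get mgr/cephadm/upgrade_state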
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.546413+0000 mon.c (mon.1) 160 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.546603+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.549181+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.551526+0000 mon.c (mon.1) 161 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.551774+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.555679+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.557093+0000 mon.c (mon.1) 162 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.557408+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.557864+0000 mon.c (mon.1) 163 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.557984+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.558492+0000 mon.c (mon.1) 164 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.558606+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.559045+0000 mon.c (mon.1) 165 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.559253+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.559745+0000 mon.c (mon.1) 166 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.560048+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.560517+0000 mon.c (mon.1) 167 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.560670+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.561210+0000 mon.c (mon.1) 168 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.561317+0000 mon.a (mon.0) 233 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.561810+0000 mon.c (mon.1) 169 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:48:56.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.561936+0000 mon.a (mon.0) 234 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.562404+0000 mon.c (mon.1) 170 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.567963+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.569617+0000 mon.c (mon.1) 171 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.569984+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.570470+0000 mon.c (mon.1) 172 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.570691+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.571169+0000 mon.c (mon.1) 173 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.571296+0000 mon.a (mon.0) 239 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.571761+0000 mon.c (mon.1) 174 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.571971+0000 mon.a (mon.0) 240 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.572381+0000 mon.c (mon.1) 175 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.572602+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.573038+0000 mon.c (mon.1) 176 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.573249+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: cephadm 2026-03-10T11:48:55.573666+0000 mgr.y (mgr.44107) 143 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.573994+0000 mon.c (mon.1) 177 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.574190+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.578466+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.578941+0000 mon.c (mon.1) 178 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.580105+0000 mon.c (mon.1) 179 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.580594+0000 mon.c (mon.1) 180 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.584462+0000 mon.a (mon.0) 245 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.626338+0000 mon.c (mon.1) 181 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:48:56.948 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.627537+0000 mon.c (mon.1) 182 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:48:56.949 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.628099+0000 mon.c (mon.1) 183 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:48:56.949 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:56 vm07 bash[46158]: audit 2026-03-10T11:48:55.632784+0000 mon.a (mon.0) 246 : audit [INF] from='mgr.44107 ' entity='mgr.y'
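The burst of `config rm container_image` audits above, capped by `Upgrade: Complete!` and the deletion of the `mgr/cephadm/upgrade_state` key, is cephadm finishing the staggered upgrade: it appears to clear the per-service container_image overrides it pinned while daemon types were upgraded in stages, then drops its persisted upgrade state. A minimal sketch of how the same end state could be confirmed by hand from a `cephadm shell`; the grep filter is illustrative, not taken from this run:

  # Upgrade bookkeeping should be gone once cephadm reports completion.
  ceph orch upgrade status                        # expect no upgrade in progress
  ceph config-key get mgr/cephadm/upgrade_state   # errors out once the key is deleted
  ceph config dump | grep container_image         # per-service overrides should be removed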
2026-03-10T11:48:58.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:58 vm05 bash[65415]: cluster 2026-03-10T11:48:57.439755+0000 mgr.y (mgr.44107) 144 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T11:48:58.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:58 vm05 bash[68966]: cluster 2026-03-10T11:48:57.439755+0000 mgr.y (mgr.44107) 144 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T11:48:58.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:58 vm07 bash[46158]: cluster 2026-03-10T11:48:57.439755+0000 mgr.y (mgr.44107) 144 : cluster [DBG] pgmap v54: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T11:48:59.340 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:48:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:48:58] "GET /metrics HTTP/1.1" 200 37526 "" "Prometheus/2.51.0"
2026-03-10T11:48:59.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:48:59 vm05 bash[68966]: audit 2026-03-10T11:48:59.057926+0000 mgr.y (mgr.44107) 145 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:59.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:48:59 vm05 bash[65415]: audit 2026-03-10T11:48:59.057926+0000 mgr.y (mgr.44107) 145 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:48:59.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:48:59 vm07 bash[46158]: audit 2026-03-10T11:48:59.057926+0000 mgr.y (mgr.44107) 145 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:01.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:01 vm05 bash[68966]: cluster 2026-03-10T11:48:59.440069+0000 mgr.y (mgr.44107) 146 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 952 B/s rd, 0 op/s
2026-03-10T11:49:01.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:01 vm05 bash[65415]: cluster 2026-03-10T11:48:59.440069+0000 mgr.y (mgr.44107) 146 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 952 B/s rd, 0 op/s
2026-03-10T11:49:01.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:01 vm07 bash[46158]: cluster 2026-03-10T11:48:59.440069+0000 mgr.y (mgr.44107) 146 : cluster [DBG] pgmap v55: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 952 B/s rd, 0 op/s
2026-03-10T11:49:02.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:02 vm07 bash[46158]: audit 2026-03-10T11:49:00.764841+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:02.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:02 vm05 bash[68966]: audit 2026-03-10T11:49:00.764841+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:02.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:02 vm05 bash[65415]: audit 2026-03-10T11:49:00.764841+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:03 vm07 bash[46158]: cluster 2026-03-10T11:49:01.440420+0000 mgr.y (mgr.44107) 147 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:03.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:03 vm05 bash[68966]: cluster 2026-03-10T11:49:01.440420+0000 mgr.y (mgr.44107) 147 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:03.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:03 vm05 bash[65415]: cluster 2026-03-10T11:49:01.440420+0000 mgr.y (mgr.44107) 147 : cluster [DBG] pgmap v56: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:04.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:04 vm05 bash[65415]: cluster 2026-03-10T11:49:03.440893+0000 mgr.y (mgr.44107) 148 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:04.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:04 vm05 bash[68966]: cluster 2026-03-10T11:49:03.440893+0000 mgr.y (mgr.44107) 148 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:04.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:04 vm07 bash[46158]: cluster 2026-03-10T11:49:03.440893+0000 mgr.y (mgr.44107) 148 : cluster [DBG] pgmap v57: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:07.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:06 vm05 bash[65415]: cluster 2026-03-10T11:49:05.441193+0000 mgr.y (mgr.44107) 149 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:07.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:06 vm05 bash[65415]: audit 2026-03-10T11:49:05.482499+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:07.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:06 vm05 bash[65415]: audit 2026-03-10T11:49:05.486250+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:07.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:06 vm05 bash[68966]: cluster 2026-03-10T11:49:05.441193+0000 mgr.y (mgr.44107) 149 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:07.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:06 vm05 bash[68966]: audit 2026-03-10T11:49:05.482499+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:07.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:06 vm05 bash[68966]: audit 2026-03-10T11:49:05.486250+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:07.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:06 vm07 bash[46158]: cluster 2026-03-10T11:49:05.441193+0000 mgr.y (mgr.44107) 149 : cluster [DBG] pgmap v58: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:07.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:06 vm07 bash[46158]: audit 2026-03-10T11:49:05.482499+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:07.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:06 vm07 bash[46158]: audit 2026-03-10T11:49:05.486250+0000 mon.c (mon.1) 184 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:09.340 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:49:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:49:08] "GET /metrics HTTP/1.1" 200 37666 "" "Prometheus/2.51.0"
2026-03-10T11:49:09.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:09 vm05 bash[68966]: cluster 2026-03-10T11:49:07.441640+0000 mgr.y (mgr.44107) 150 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:09.876 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:09 vm05 bash[68966]: audit 2026-03-10T11:49:09.062410+0000 mgr.y (mgr.44107) 151 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:09.876 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:09 vm05 bash[65415]: cluster 2026-03-10T11:49:07.441640+0000 mgr.y (mgr.44107) 150 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:09.876 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:09 vm05 bash[65415]: audit 2026-03-10T11:49:09.062410+0000 mgr.y (mgr.44107) 151 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:09.945 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:09 vm07 bash[46158]: cluster 2026-03-10T11:49:07.441640+0000 mgr.y (mgr.44107) 150 : cluster [DBG] pgmap v59: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:09.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:09 vm07 bash[46158]: audit 2026-03-10T11:49:09.062410+0000 mgr.y (mgr.44107) 151 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:11.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:10 vm05 bash[68966]: cluster 2026-03-10T11:49:09.441939+0000 mgr.y (mgr.44107) 152 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:10 vm05 bash[65415]: cluster 2026-03-10T11:49:09.441939+0000 mgr.y (mgr.44107) 152 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:11.195 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:10 vm07 bash[46158]: cluster 2026-03-10T11:49:09.441939+0000 mgr.y (mgr.44107) 152 : cluster [DBG] pgmap v60: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:13.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:13 vm05 bash[68966]: cluster 2026-03-10T11:49:11.442260+0000 mgr.y (mgr.44107) 153 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:13.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:13 vm05 bash[65415]: cluster 2026-03-10T11:49:11.442260+0000 mgr.y (mgr.44107) 153 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
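The interleaved chatter here is routine polling rather than upgrade activity: mgr.y publishes a pgmap digest every two seconds, the iSCSI gateway (client.iscsi.foo.vm05.txapnk) reports in with `service status` roughly every ten, and the mgr re-reads the OSD blocklist. A short sketch of the interactive equivalents, using only commands that appear in the audit entries above:

  # The pgmap digests and blocklist polls map to these commands.
  ceph pg stat            # one-line pgmap summary, e.g. "161 pgs: 161 active+clean; ..."
  ceph osd blocklist ls   # the listing the mgr polls periodically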
2026-03-10T11:49:13.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:13 vm07 bash[46158]: cluster 2026-03-10T11:49:11.442260+0000 mgr.y (mgr.44107) 153 : cluster [DBG] pgmap v61: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:15.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:15 vm05 bash[68966]: cluster 2026-03-10T11:49:13.442720+0000 mgr.y (mgr.44107) 154 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:15.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:15 vm05 bash[65415]: cluster 2026-03-10T11:49:13.442720+0000 mgr.y (mgr.44107) 154 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:15.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:15 vm07 bash[46158]: cluster 2026-03-10T11:49:13.442720+0000 mgr.y (mgr.44107) 154 : cluster [DBG] pgmap v62: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:17.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:17 vm05 bash[68966]: cluster 2026-03-10T11:49:15.442954+0000 mgr.y (mgr.44107) 155 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:17.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:17 vm05 bash[65415]: cluster 2026-03-10T11:49:15.442954+0000 mgr.y (mgr.44107) 155 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:17.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:17 vm07 bash[46158]: cluster 2026-03-10T11:49:15.442954+0000 mgr.y (mgr.44107) 155 : cluster [DBG] pgmap v63: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:19.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:19 vm05 bash[65415]: cluster 2026-03-10T11:49:17.443408+0000 mgr.y (mgr.44107) 156 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:19.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:19 vm05 bash[68966]: cluster 2026-03-10T11:49:17.443408+0000 mgr.y (mgr.44107) 156 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:19.341 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:49:18 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:49:18] "GET /metrics HTTP/1.1" 200 37666 "" "Prometheus/2.51.0"
2026-03-10T11:49:19.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:19 vm07 bash[46158]: cluster 2026-03-10T11:49:17.443408+0000 mgr.y (mgr.44107) 156 : cluster [DBG] pgmap v64: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:20.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:20 vm05 bash[65415]: audit 2026-03-10T11:49:19.072360+0000 mgr.y (mgr.44107) 157 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:20.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:20 vm05 bash[68966]: audit 2026-03-10T11:49:19.072360+0000 mgr.y (mgr.44107) 157 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:20.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:20 vm07 bash[46158]: audit 2026-03-10T11:49:19.072360+0000 mgr.y (mgr.44107) 157 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:21.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:21 vm05 bash[65415]: cluster 2026-03-10T11:49:19.443639+0000 mgr.y (mgr.44107) 158 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:21.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:21 vm05 bash[65415]: audit 2026-03-10T11:49:20.478145+0000 mon.c (mon.1) 185 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:21.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:21 vm05 bash[68966]: cluster 2026-03-10T11:49:19.443639+0000 mgr.y (mgr.44107) 158 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:21.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:21 vm05 bash[68966]: audit 2026-03-10T11:49:20.478145+0000 mon.c (mon.1) 185 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:21.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:21 vm07 bash[46158]: cluster 2026-03-10T11:49:19.443639+0000 mgr.y (mgr.44107) 158 : cluster [DBG] pgmap v65: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:21.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:21 vm07 bash[46158]: audit 2026-03-10T11:49:20.478145+0000 mon.c (mon.1) 185 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:23.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:23 vm05 bash[65415]: cluster 2026-03-10T11:49:21.444010+0000 mgr.y (mgr.44107) 159 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:23.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:23 vm05 bash[68966]: cluster 2026-03-10T11:49:21.444010+0000 mgr.y (mgr.44107) 159 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:23.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:23 vm07 bash[46158]: cluster 2026-03-10T11:49:21.444010+0000 mgr.y (mgr.44107) 159 : cluster [DBG] pgmap v66: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:25.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:25 vm05 bash[65415]: cluster 2026-03-10T11:49:23.444455+0000 mgr.y (mgr.44107) 160 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:25.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:25 vm05 bash[68966]: cluster 2026-03-10T11:49:23.444455+0000 mgr.y (mgr.44107) 160 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:25.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:25 vm07 bash[46158]: cluster 2026-03-10T11:49:23.444455+0000 mgr.y (mgr.44107) 160 : cluster [DBG] pgmap v67: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:25.513 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (15m) 36s ago 22m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (3m) 104s ago 22m 64.6M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (3m) 36s ago 22m 43.8M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (3m) 104s ago 25m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (13m) 36s ago 26m 525M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (115s) 36s ago 26m 43.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (2m) 104s ago 25m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (2m) 36s ago 25m 41.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (15m) 36s ago 23m 7900k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (15m) 104s ago 23m 7816k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (25m) 36s ago 25m 54.8M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (25m) 36s ago 25m 57.3M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (42s) 36s ago 24m 21.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (58s) 36s ago 24m 65.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (24m) 104s ago 24m 56.0M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (24m) 104s ago 24m 52.5M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (23m) 104s ago 23m 51.5M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (23m) 104s ago 23m 54.1M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (3m) 104s ago 22m 40.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (22m) 36s ago 22m 88.8M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:49:25.923 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (22m) 104s ago 22m 89.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:49:25.971 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.osd | length == 2'"'"''
2026-03-10T11:49:26.482 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:49:26.522 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '"'"'.up_to_date | length == 7'"'"''
2026-03-10T11:49:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:27 vm05 bash[68966]: cluster 2026-03-10T11:49:25.444749+0000 mgr.y (mgr.44107) 161 : cluster [DBG] pgmap v68: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:27 vm05 bash[68966]: audit 2026-03-10T11:49:25.453939+0000 mgr.y (mgr.44107) 162 : audit [DBG] from='client.54264 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:49:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:27 vm05 bash[68966]: audit 2026-03-10T11:49:25.923220+0000 mgr.y (mgr.44107) 163 : audit [DBG] from='client.34283 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
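Both checks above pipe cluster JSON through jq with `-e`, which turns the filter's truth value into the command's exit status, so the shell command itself is the assertion; the `'"'"'` runs are just how a single quote is escaped inside a single-quoted `bash -c` argument. Unwrapped, the assertions read as sketched below, consistent with the `ceph orch ps` listing above, where daemons are split between 17.2.0 and 19.2.3:

  # jq -e exits nonzero when the filter yields false/null, failing the teuthology step.
  ceph versions | jq -e '.osd | length == 2'    # two distinct OSD versions mid-upgrade
  ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 7'
  # seven daemons already on the target build: mgr.x, mgr.y, mon.a/b/c, osd.2, osd.3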
2026-03-10T11:49:27.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:27 vm05 bash[68966]: audit 2026-03-10T11:49:26.470546+0000 mon.b (mon.2) 26 : audit [DBG] from='client.? 192.168.123.105:0/2676983807' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:28.381 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:49:28.381 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:28 vm05 bash[68966]: audit 2026-03-10T11:49:26.952155+0000 mgr.y (mgr.44107) 164 : audit [DBG] from='client.44260 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:49:28.425 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-10T11:49:28.847 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:49:28.847 INFO:teuthology.orchestra.run.vm05.stdout:    "target_image": null,
2026-03-10T11:49:28.847 INFO:teuthology.orchestra.run.vm05.stdout:    "in_progress": false,
2026-03-10T11:49:28.847 INFO:teuthology.orchestra.run.vm05.stdout:    "which": "",
2026-03-10T11:49:28.847 INFO:teuthology.orchestra.run.vm05.stdout:    "services_complete": [],
2026-03-10T11:49:28.847 INFO:teuthology.orchestra.run.vm05.stdout:    "progress": null,
2026-03-10T11:49:28.847 INFO:teuthology.orchestra.run.vm05.stdout:    "message": "",
2026-03-10T11:49:28.847 INFO:teuthology.orchestra.run.vm05.stdout:    "is_paused": false
2026-03-10T11:49:28.847 INFO:teuthology.orchestra.run.vm05.stdout:}
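This is the idle baseline: no target image, nothing in progress. A one-line assertion of that state in the same jq -e style the harness uses elsewhere (illustrative only, not from this run):

  ceph orch upgrade status | jq -e '.in_progress == false and .target_image == null'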
2026-03-10T11:49:28.899 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:49:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:49:28] "GET /metrics HTTP/1.1" 200 37668 "" "Prometheus/2.51.0"
2026-03-10T11:49:28.899 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-10T11:49:29.340 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
2026-03-10T11:49:29.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:29 vm05 bash[68966]: cluster 2026-03-10T11:49:27.445240+0000 mgr.y (mgr.44107) 165 : cluster [DBG] pgmap v69: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:29.392 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd --limit 1'
2026-03-10T11:49:30.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:30 vm07 bash[46158]: audit 2026-03-10T11:49:28.851859+0000 mgr.y (mgr.44107) 166 : audit [DBG] from='client.34295 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:49:30.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:30 vm07 bash[46158]: audit 2026-03-10T11:49:29.081345+0000 mgr.y (mgr.44107) 167 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
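This is the staggered-upgrade step under test: --daemon-types restricts the pass to crash and osd daemons, and --limit 1 stops the pass after a single daemon has been redeployed on the new image. A sketch of how such a rollout is typically driven to completion, using the documented cephadm flags and this run's image (the follow-up commands are illustrative, not taken from this log):

  # phase 1: prove the mechanism on one daemon
  ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd --limit 1
  # phase 2: once status shows in_progress false, widen to the whole class
  ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd
  # phase 3: upgrade everything that remains
  ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1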
2026-03-10T11:49:30.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:30 vm07 bash[46158]: audit 2026-03-10T11:49:29.340683+0000 mon.b (mon.2) 27 : audit [DBG] from='client.? 192.168.123.105:0/1983004556' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:49:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:31 vm07 bash[46158]: cluster 2026-03-10T11:49:29.445552+0000 mgr.y (mgr.44107) 168 : cluster [DBG] pgmap v70: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:31 vm07 bash[46158]: audit 2026-03-10T11:49:29.830121+0000 mgr.y (mgr.44107) 169 : audit [DBG] from='client.54294 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "limit": 1, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:49:31.515 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:49:31.587 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done'
2026-03-10T11:49:32.065 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:49:32.447 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (16m) 43s ago 22m 14.4M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (3m) 110s ago 22m 64.6M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (3m) 43s ago 22m 43.8M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (3m) 110s ago 25m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (13m) 43s ago 26m 525M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (2m) 43s ago 26m 43.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (2m) 110s ago 25m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (2m) 43s ago 25m 41.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (16m) 43s ago 23m 7900k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (16m) 110s ago 23m 7816k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (25m) 43s ago 25m 54.8M 4096M 17.2.0 e1d6a67b021e 767dc4919d3a
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (25m) 43s ago 25m 57.3M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (48s) 43s ago 24m 21.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (64s) 43s ago 24m 65.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (24m) 110s ago 24m 56.0M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (24m) 110s ago 24m 52.5M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (23m) 110s ago 23m 51.5M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (23m) 110s ago 23m 54.1M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (3m) 110s ago 23m 40.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (22m) 43s ago 22m 88.8M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:49:32.448 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (22m) 110s ago 22m 89.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
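Unescaped from its cephadm-shell wrapping, the polling loop above reads (a readable rendering of the same command, not a change to it):

  while ceph orch upgrade status | jq '.in_progress' | grep true &&
        ! ceph orch upgrade status | jq '.message' | grep Error ; do
      ceph orch ps
      ceph versions
      ceph orch upgrade status
      sleep 30
  done

It dumps daemon and version state every 30 seconds until the upgrade either finishes (in_progress goes false) or reports an Error in its status message.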
2026-03-10T11:49:32.681 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:49:32.681 INFO:teuthology.orchestra.run.vm05.stdout:    "mon": {
2026-03-10T11:49:32.681 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:49:32.681 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:49:32.681 INFO:teuthology.orchestra.run.vm05.stdout:    "mgr": {
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:    "osd": {
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 6,
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:    "rgw": {
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:    "overall": {
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 8,
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:    }
2026-03-10T11:49:32.682 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:49:32.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:32 vm05 bash[65415]: cluster 2026-03-10T11:49:31.445861+0000 mgr.y (mgr.44107) 170 : cluster [DBG] pgmap v71: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:49:32.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:32 vm05 bash[65415]: cephadm 2026-03-10T11:49:31.472414+0000 mgr.y (mgr.44107) 171 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:49:32.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:32 vm05 bash[65415]: audit 2026-03-10T11:49:31.516215+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.44107 ' entity='mgr.y'
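Two OSDs (osd.2 and osd.3 in the table above) are already on the target image from earlier staggered passes; the six remaining 17.2.0 OSDs and both rgw daemons are untouched, and the pass just started is only allowed to convert one more. Summing the overall map gives the 15 core daemons that report a ceph version (the monitoring stack is versioned separately). A quick arithmetic check with jq, assuming the same shell context (illustrative):

  ceph versions | jq '[.overall[]] | add'   # 8 + 7 = 15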
2026-03-10T11:49:32.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:32 vm05 bash[65415]: audit 2026-03-10T11:49:31.519785+0000 mon.c (mon.1) 186 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:32.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:32 vm05 bash[65415]: audit 2026-03-10T11:49:31.524699+0000 mon.c (mon.1) 187 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:49:32.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:32 vm05 bash[65415]: audit 2026-03-10T11:49:31.525648+0000 mon.c (mon.1) 188 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:49:32.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:32 vm05 bash[65415]: audit 2026-03-10T11:49:31.565941+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:32.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:32 vm05 bash[65415]: cephadm 2026-03-10T11:49:31.615098+0000 mgr.y (mgr.44107) 172 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:49:32.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:32 vm05 bash[65415]: audit 2026-03-10T11:49:32.058804+0000 mgr.y (mgr.44107) 173 : audit [DBG] from='client.34304 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:49:32.879 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:49:32.879 INFO:teuthology.orchestra.run.vm05.stdout:    "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T11:49:32.879 INFO:teuthology.orchestra.run.vm05.stdout:    "in_progress": true,
2026-03-10T11:49:32.879 INFO:teuthology.orchestra.run.vm05.stdout:    "which": "Upgrading daemons of type(s) crash,osd. Upgrade limited to 1 daemons (1 remaining).",
2026-03-10T11:49:32.879 INFO:teuthology.orchestra.run.vm05.stdout:    "services_complete": [],
2026-03-10T11:49:32.879 INFO:teuthology.orchestra.run.vm05.stdout:    "progress": "",
2026-03-10T11:49:32.879 INFO:teuthology.orchestra.run.vm05.stdout:    "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image",
2026-03-10T11:49:32.879 INFO:teuthology.orchestra.run.vm05.stdout:    "is_paused": false
2026-03-10T11:49:32.879 INFO:teuthology.orchestra.run.vm05.stdout:}
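The "which" field confirms the staggered parameters took effect: only crash and osd daemons are candidates, and this pass stops after one daemon. If the first image pull stalled or errored here, the documented escape hatches preserve the target rather than abandoning it (illustrative, not commands from this run):

  ceph orch upgrade pause    # freeze progress; target_image is kept and is_paused goes true
  ceph orch upgrade resume   # pick up where it left off
  ceph orch upgrade stop     # abandon the upgrade entirely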
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:32.256304+0000 mgr.y (mgr.44107) 174 : audit [DBG] from='client.54306 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:32.448244+0000 mgr.y (mgr.44107) 175 : audit [DBG] from='client.54312 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:32.685880+0000 mon.a (mon.0) 251 : audit [DBG] from='client.? 192.168.123.105:0/1407902078' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:32.879013+0000 mgr.y (mgr.44107) 176 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.055765+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: cephadm 2026-03-10T11:49:33.056595+0000 mgr.y (mgr.44107) 177 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: cephadm 2026-03-10T11:49:33.056621+0000 mgr.y (mgr.44107) 178 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.057483+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.058573+0000 mon.c (mon.1) 190 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: cephadm 2026-03-10T11:49:33.059088+0000 mgr.y (mgr.44107) 179 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.062132+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.064087+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: cephadm 2026-03-10T11:49:33.064593+0000 mgr.y (mgr.44107) 180 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.067506+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.069178+0000 mon.c (mon.1) 192 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: cephadm 2026-03-10T11:49:33.069669+0000 mgr.y (mgr.44107) 181 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.072147+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.073892+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.073991+0000 mgr.y (mgr.44107) 182 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: cephadm 2026-03-10T11:49:33.074590+0000 mgr.y (mgr.44107) 183 : cephadm [INF] Upgrade: osd.0 is safe to restart
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.465548+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.747 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.467473+0000 mon.c (mon.1) 194 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:33 vm05 bash[68966]: audit 2026-03-10T11:49:33.468044+0000 mon.c (mon.1) 195 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
192.168.123.105:0/1407902078' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:32.879013+0000 mgr.y (mgr.44107) 176 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.055765+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: cephadm 2026-03-10T11:49:33.056595+0000 mgr.y (mgr.44107) 177 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: cephadm 2026-03-10T11:49:33.056621+0000 mgr.y (mgr.44107) 178 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.057483+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.058573+0000 mon.c (mon.1) 190 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: cephadm 2026-03-10T11:49:33.059088+0000 mgr.y (mgr.44107) 179 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.062132+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.064087+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: cephadm 2026-03-10T11:49:33.064593+0000 mgr.y (mgr.44107) 180 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.067506+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.069178+0000 mon.c (mon.1) 192 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: cephadm 2026-03-10T11:49:33.069669+0000 mgr.y (mgr.44107) 181 : cephadm [INF] Upgrade: Setting container_image for all crash
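The sequence above is the mgr resolving the staggered-upgrade target, first to a version and then to an immutable image digest, and pinning container_image for each daemon type (mgr, mon, crash) before redeploying anything. For orientation, a staggered upgrade of this shape is driven with commands like the following; the image tag and filters are illustrative placeholders, not values from this run:

    # start an upgrade restricted to a subset of daemons (staggered upgrade)
    ceph orch upgrade start --image quay.io/ceph/ceph:v19 --daemon-types mgr,mon --limit 1
    # poll progress; this is what produces the recurring 'orch upgrade status' audit lines
    ceph orch upgrade status
    # compare daemon versions as they converge
    ceph versions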
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.072147+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.073892+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.073991+0000 mgr.y (mgr.44107) 182 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: cephadm 2026-03-10T11:49:33.074590+0000 mgr.y (mgr.44107) 183 : cephadm [INF] Upgrade: osd.0 is safe to restart
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.465548+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.467473+0000 mon.c (mon.1) 194 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-10T11:49:33.748 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:33 vm05 bash[65415]: audit 2026-03-10T11:49:33.468044+0000 mon.c (mon.1) 195 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:32.256304+0000 mgr.y (mgr.44107) 174 : audit [DBG] from='client.54306 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
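Before touching osd.0, cephadm asks the monitors whether the OSD can be stopped without reducing data availability (the 'osd ok-to-stop' exchange above); only on a positive answer does it log 'osd.0 is safe to restart' and fetch the daemon's keyring ('auth get') plus a minimal conf for the redeployed container. The same safety check can be run by hand with the stock CLI:

    # exit status 0 means stopping the listed OSDs should not make any PG unavailable
    ceph osd ok-to-stop 0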
cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:32.256304+0000 mgr.y (mgr.44107) 174 : audit [DBG] from='client.54306 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:32.448244+0000 mgr.y (mgr.44107) 175 : audit [DBG] from='client.54312 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:32.448244+0000 mgr.y (mgr.44107) 175 : audit [DBG] from='client.54312 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:32.685880+0000 mon.a (mon.0) 251 : audit [DBG] from='client.? 192.168.123.105:0/1407902078' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:32.685880+0000 mon.a (mon.0) 251 : audit [DBG] from='client.? 192.168.123.105:0/1407902078' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:32.879013+0000 mgr.y (mgr.44107) 176 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:32.879013+0000 mgr.y (mgr.44107) 176 : audit [DBG] from='client.54324 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.055765+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.055765+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.056595+0000 mgr.y (mgr.44107) 177 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.056595+0000 mgr.y (mgr.44107) 177 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid) 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.056621+0000 mgr.y (mgr.44107) 178 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.056621+0000 mgr.y (mgr.44107) 178 : cephadm [INF] Upgrade: Target container is 
quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc'] 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.057483+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.057483+0000 mon.c (mon.1) 189 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.058573+0000 mon.c (mon.1) 190 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.058573+0000 mon.c (mon.1) 190 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.059088+0000 mgr.y (mgr.44107) 179 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.059088+0000 mgr.y (mgr.44107) 179 : cephadm [INF] Upgrade: Setting container_image for all mgr 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.062132+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.062132+0000 mon.a (mon.0) 253 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.064087+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.064087+0000 mon.c (mon.1) 191 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.064593+0000 mgr.y (mgr.44107) 180 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.064593+0000 mgr.y (mgr.44107) 180 : cephadm [INF] Upgrade: Setting container_image for all mon 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.067506+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.067506+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.946 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.069178+0000 mon.c (mon.1) 192 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.069178+0000 mon.c (mon.1) 192 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.069669+0000 mgr.y (mgr.44107) 181 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.069669+0000 mgr.y (mgr.44107) 181 : cephadm [INF] Upgrade: Setting container_image for all crash 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.072147+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.072147+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.073892+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.073892+0000 mon.c (mon.1) 193 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T11:49:33.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.073991+0000 mgr.y (mgr.44107) 182 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T11:49:33.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.073991+0000 mgr.y (mgr.44107) 182 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "osd ok-to-stop", "ids": ["0"], "max": 16}]: dispatch 2026-03-10T11:49:33.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.074590+0000 mgr.y (mgr.44107) 183 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-10T11:49:33.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: cephadm 2026-03-10T11:49:33.074590+0000 mgr.y (mgr.44107) 183 : cephadm [INF] Upgrade: osd.0 is safe to restart 2026-03-10T11:49:33.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.465548+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.465548+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:33.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.467473+0000 mon.c (mon.1) 194 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T11:49:33.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.467473+0000 mon.c (mon.1) 194 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-10T11:49:33.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.468044+0000 mon.c (mon.1) 195 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:33.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:33 vm07 bash[46158]: audit 2026-03-10T11:49:33.468044+0000 mon.c (mon.1) 195 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:34.277 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:49:34.278 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:49:34.278 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:49:34.278 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:34.278 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:34.278 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:34.279 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:34.279 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:34.279 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: Stopping Ceph osd.0 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:49:34.279 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:49:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
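The KillMode=none warning repeats once per daemon journal because every cephadm-managed unit on the host instantiates the same ceph-<fsid>@.service template, so a single template issue is echoed by each followed journal. Older cephadm unit files used KillMode=none deliberately, leaving container teardown to the container runtime; newer systemd deprecates it. Purely as a sketch of how such a unit could be overridden on a host (this run does not do this; 'mixed' is simply one of the values systemd itself suggests):

    # create a drop-in override for the instantiated osd.0 unit
    sudo systemctl edit ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.0.service
    # add in the editor:
    #   [Service]
    #   KillMode=mixed
    sudo systemctl daemon-reload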
2026-03-10T11:49:34.590 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:34 vm05 bash[25160]: debug 2026-03-10T11:49:34.317+0000 7f8033a12700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:49:34.591 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:34 vm05 bash[25160]: debug 2026-03-10T11:49:34.317+0000 7f8033a12700 -1 osd.0 109 *** Got signal Terminated ***
2026-03-10T11:49:34.591 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:34 vm05 bash[25160]: debug 2026-03-10T11:49:34.317+0000 7f8033a12700 -1 osd.0 109 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:49:35.426 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:35 vm05 bash[65415]: cluster 2026-03-10T11:49:33.446329+0000 mgr.y (mgr.44107) 184 : cluster [DBG] pgmap v72: 161 pgs: 161 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:49:35.426 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:35 vm05 bash[65415]: cephadm 2026-03-10T11:49:33.460331+0000 mgr.y (mgr.44107) 185 : cephadm [INF] Upgrade: Updating osd.0
2026-03-10T11:49:35.426 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:35 vm05 bash[65415]: cephadm 2026-03-10T11:49:33.469467+0000 mgr.y (mgr.44107) 186 : cephadm [INF] Deploying daemon osd.0 on vm05
2026-03-10T11:49:35.426 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:35 vm05 bash[65415]: cluster 2026-03-10T11:49:34.322415+0000 mon.a (mon.0) 257 : cluster [INF] osd.0 marked itself down and dead
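The 'Immediate shutdown (osd_fast_shutdown=true)' line shows why the stop looks abrupt: with fast shutdown enabled the OSD exits on SIGTERM without a long drain, after first telling the monitors it is going down ('marked itself down and dead') so peering can react immediately. It is an ordinary config option and can be inspected with the stock CLI:

    # value in effect for the osd daemon class (option name taken from the log line above)
    ceph config get osd osd_fast_shutdown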
2026-03-10T11:49:35.426 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:35 vm05 bash[86083]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-0
2026-03-10T11:49:35.692 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:35.692 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:35.692 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:35.692 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:35.692 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:35.693 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:35.693 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.0.service: Deactivated successfully.
2026-03-10T11:49:35.693 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: Stopped Ceph osd.0 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:49:35.693 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:35.693 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:35.693 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:49:36.090 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:35 vm05 systemd[1]: Started Ceph osd.0 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:49:36.090 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:35 vm05 bash[86289]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:49:36.091 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:35 vm05 bash[86289]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: cluster 2026-03-10T11:49:35.075199+0000 mon.a (mon.0) 258 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: cluster 2026-03-10T11:49:35.081468+0000 mon.a (mon.0) 259 : cluster [DBG] osdmap e110: 8 total, 7 up, 8 in
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.450256+0000 mon.c (mon.1) 196 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.450507+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.451093+0000 mon.c (mon.1) 197 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.451294+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.451990+0000 mon.c (mon.1) 198 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.452248+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.452554+0000 mon.c (mon.1) 199 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.452776+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.453583+0000 mon.c (mon.1) 200 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.453995+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.454373+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.454598+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.454956+0000 mon.c (mon.1) 202 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]: dispatch
2026-03-10T11:49:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.455167+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]: dispatch
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.455488+0000 mon.c (mon.1) 203 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]: dispatch
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.455698+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]: dispatch
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.456023+0000 mon.c (mon.1) 204 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]: dispatch
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.456212+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]: dispatch
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.456530+0000 mon.c (mon.1) 205 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.456713+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]: dispatch
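This burst of 'osd pg-upmap-items' commands from mgr.y, each audited once on mon.c (mon.1), which forwards it, and once on the leader mon.a (mon.0), appears to be the mgr (plausibly the balancer in upmap mode) remapping PGs off the stopped OSD; for example, PG 2.4 is moved from osd.7 to osd.2. The same primitive exists on the CLI, taking (from, to) OSD pairs:

    # remap PG 2.4 from osd.7 to osd.2 (pair taken from the audit lines above)
    ceph osd pg-upmap-items 2.4 7 2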
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.495015+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.501841+0000 mon.c (mon.1) 206 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.723407+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.731849+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.736623+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:35.738633+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.080929+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]': finished
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.080964+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]': finished
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.080982+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]': finished
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.080999+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]': finished
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.081019+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]': finished
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.081036+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]': finished
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.081051+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]': finished
2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.081077+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]': finished
cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]': finished 2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.081101+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]': finished 2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.081101+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]': finished 2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.081122+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]': finished 2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: audit 2026-03-10T11:49:36.081122+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]': finished 2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: cluster 2026-03-10T11:49:36.083329+0000 mon.a (mon.0) 284 : cluster [DBG] osdmap e111: 8 total, 7 up, 8 in 2026-03-10T11:49:36.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:36 vm07 bash[46158]: cluster 2026-03-10T11:49:36.083329+0000 mon.a (mon.0) 284 : cluster [DBG] osdmap e111: 8 total, 7 up, 8 in 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: cluster 2026-03-10T11:49:35.075199+0000 mon.a (mon.0) 258 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: cluster 2026-03-10T11:49:35.075199+0000 mon.a (mon.0) 258 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: cluster 2026-03-10T11:49:35.081468+0000 mon.a (mon.0) 259 : cluster [DBG] osdmap e110: 8 total, 7 up, 8 in 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: cluster 2026-03-10T11:49:35.081468+0000 mon.a (mon.0) 259 : cluster [DBG] osdmap e110: 8 total, 7 up, 8 in 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.450256+0000 mon.c (mon.1) 196 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.450256+0000 mon.c (mon.1) 196 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.450507+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.450507+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.451093+0000 mon.c (mon.1) 197 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.451093+0000 mon.c (mon.1) 197 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.451294+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.451294+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.451990+0000 mon.c (mon.1) 198 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.451990+0000 mon.c (mon.1) 198 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.452248+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.452248+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.452554+0000 mon.c (mon.1) 199 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.452554+0000 mon.c (mon.1) 199 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.452776+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.452776+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.453583+0000 mon.c (mon.1) 200 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.453583+0000 mon.c (mon.1) 200 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.453995+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.453995+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.454373+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.454373+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.454598+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.454598+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.454956+0000 mon.c (mon.1) 202 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.454956+0000 mon.c (mon.1) 202 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]: dispatch 2026-03-10T11:49:36.591 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.455167+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.455167+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.455488+0000 mon.c (mon.1) 203 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.455488+0000 mon.c (mon.1) 203 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.455698+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.455698+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.456023+0000 mon.c (mon.1) 204 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.456023+0000 mon.c (mon.1) 204 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.456212+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.456212+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.456530+0000 mon.c (mon.1) 205 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.456530+0000 mon.c (mon.1) 205 : audit [INF] from='mgr.44107 
192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.456713+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.456713+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.495015+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.495015+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.501841+0000 mon.c (mon.1) 206 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.501841+0000 mon.c (mon.1) 206 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.723407+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.723407+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.731849+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.731849+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.736623+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.736623+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:36.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:35.738633+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: cluster 2026-03-10T11:49:35.075199+0000 mon.a (mon.0) 258 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: cluster 
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: cluster 2026-03-10T11:49:35.081468+0000 mon.a (mon.0) 259 : cluster [DBG] osdmap e110: 8 total, 7 up, 8 in
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.450256+0000 mon.c (mon.1) 196 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.450507+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.451093+0000 mon.c (mon.1) 197 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.451294+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.451990+0000 mon.c (mon.1) 198 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.452248+0000 mon.a (mon.0) 262 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.452554+0000 mon.c (mon.1) 199 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.452776+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.453583+0000 mon.c (mon.1) 200 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.453995+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.454373+0000 mon.c (mon.1) 201 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.454598+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.454956+0000 mon.c (mon.1) 202 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.455167+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.455488+0000 mon.c (mon.1) 203 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]: dispatch
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.080929+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]': finished
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.080964+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]': finished
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.080982+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]': finished
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.080999+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]': finished
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.081019+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]': finished
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.081036+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]': finished
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.081051+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]': finished
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.081077+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]': finished
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.081101+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]': finished
2026-03-10T11:49:36.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: audit 2026-03-10T11:49:36.081122+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:36 vm05 bash[65415]: cluster 2026-03-10T11:49:36.083329+0000 mon.a (mon.0) 284 : cluster [DBG] osdmap e111: 8 total, 7 up, 8 in
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.455698+0000 mon.a (mon.0) 267 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]: dispatch
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.456023+0000 mon.c (mon.1) 204 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]: dispatch
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.456212+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]: dispatch
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.456530+0000 mon.c (mon.1) 205 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.456713+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]: dispatch
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.495015+0000 mon.a (mon.0) 270 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.501841+0000 mon.c (mon.1) 206 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.723407+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.731849+0000 mon.a (mon.0) 272 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.736623+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:35.738633+0000 mon.c (mon.1) 207 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.080929+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.4", "id": [7, 2]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.080964+0000 mon.a (mon.0) 275 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.e", "id": [7, 0]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.080982+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.f", "id": [7, 2]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.080999+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.18", "id": [7, 0]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.081019+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.1d", "id": [7, 4]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.081036+0000 mon.a (mon.0) 279 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.0", "id": [3, 1]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.081051+0000 mon.a (mon.0) 280 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.2", "id": [3, 0]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.081077+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.7", "id": [5, 0]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.081101+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.c", "id": [3, 0]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: audit 2026-03-10T11:49:36.081122+0000 mon.a (mon.0) 283 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "6.15", "id": [7, 0]}]': finished
2026-03-10T11:49:36.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:36 vm05 bash[68966]: cluster 2026-03-10T11:49:36.083329+0000 mon.a (mon.0) 284 : cluster [DBG] osdmap e111: 8 total, 7 up, 8 in
2026-03-10T11:49:37.090 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:36 vm05 bash[86289]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T11:49:37.091 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:36 vm05 bash[86289]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:49:37.091 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:36 vm05 bash[86289]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:49:37.091 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:36 vm05 bash[86289]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-10T11:49:37.091 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:36 vm05 bash[86289]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a5f62153-ad20-4d9d-937d-ed5540282875/osd-block-0992e6dc-d298-462b-bccd-b74959342712 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
2026-03-10T11:49:37.400 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:37 vm05 bash[65415]: cluster 2026-03-10T11:49:35.446781+0000 mgr.y (mgr.44107) 187 : cluster [DBG] pgmap v74: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T11:49:37.400 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:37 vm05 bash[68966]: cluster 2026-03-10T11:49:35.446781+0000 mgr.y (mgr.44107) 187 : cluster [DBG] pgmap v74: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T11:49:37.400 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:37 vm05 bash[86289]: Running command: /usr/bin/ln -snf /dev/ceph-a5f62153-ad20-4d9d-937d-ed5540282875/osd-block-0992e6dc-d298-462b-bccd-b74959342712 /var/lib/ceph/osd/ceph-0/block
2026-03-10T11:49:37.400 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:37 vm05 bash[86289]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
2026-03-10T11:49:37.400 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:37 vm05 bash[86289]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
2026-03-10T11:49:37.400 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:37 vm05 bash[86289]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-10T11:49:37.400 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:37 vm05 bash[86289]: --> ceph-volume lvm activate successful for osd ID: 0
2026-03-10T11:49:35.446781+0000 mgr.y (mgr.44107) 187 : cluster [DBG] pgmap v74: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T11:49:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:37 vm07 bash[46158]: cluster 2026-03-10T11:49:35.446781+0000 mgr.y (mgr.44107) 187 : cluster [DBG] pgmap v74: 161 pgs: 21 stale+active+clean, 140 active+clean; 457 KiB data, 145 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-10T11:49:37.840 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:37 vm05 bash[86636]: debug 2026-03-10T11:49:37.433+0000 7fa543886640 1 -- 192.168.123.105:0/3179318301 <== mon.0 v2:192.168.123.105:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x563fef113680 con 0x563fef10c000 2026-03-10T11:49:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:38 vm07 bash[46158]: cluster 2026-03-10T11:49:37.131355+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T11:49:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:38 vm07 bash[46158]: cluster 2026-03-10T11:49:37.131355+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T11:49:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:38 vm07 bash[46158]: cluster 2026-03-10T11:49:37.447114+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v77: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:38 vm07 bash[46158]: cluster 2026-03-10T11:49:37.447114+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v77: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:38 vm07 bash[46158]: cluster 2026-03-10T11:49:38.133499+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T11:49:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:38 vm07 bash[46158]: cluster 2026-03-10T11:49:38.133499+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T11:49:38.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:38 vm05 bash[65415]: cluster 2026-03-10T11:49:37.131355+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:38 vm05 bash[65415]: cluster 2026-03-10T11:49:37.131355+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:38 vm05 bash[65415]: cluster 2026-03-10T11:49:37.447114+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v77: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:38 vm05 bash[65415]: cluster 2026-03-10T11:49:37.447114+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v77: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:38.591 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:38 vm05 bash[65415]: cluster 2026-03-10T11:49:38.133499+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:38 vm05 bash[65415]: cluster 2026-03-10T11:49:38.133499+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:38 vm05 bash[68966]: cluster 2026-03-10T11:49:37.131355+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:38 vm05 bash[68966]: cluster 2026-03-10T11:49:37.131355+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e112: 8 total, 7 up, 8 in 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:38 vm05 bash[68966]: cluster 2026-03-10T11:49:37.447114+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v77: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:38 vm05 bash[68966]: cluster 2026-03-10T11:49:37.447114+0000 mgr.y (mgr.44107) 188 : cluster [DBG] pgmap v77: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:38 vm05 bash[68966]: cluster 2026-03-10T11:49:38.133499+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T11:49:38.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:38 vm05 bash[68966]: cluster 2026-03-10T11:49:38.133499+0000 mon.a (mon.0) 286 : cluster [DBG] osdmap e113: 8 total, 7 up, 8 in 2026-03-10T11:49:38.591 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:38 vm05 bash[86636]: debug 2026-03-10T11:49:38.137+0000 7fa5460f0740 -1 Falling back to public interface 2026-03-10T11:49:39.152 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:49:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:49:38] "GET /metrics HTTP/1.1" 200 37734 "" "Prometheus/2.51.0" 2026-03-10T11:49:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:39 vm07 bash[46158]: cluster 2026-03-10T11:49:38.144833+0000 mon.a (mon.0) 287 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-10T11:49:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:39 vm07 bash[46158]: cluster 2026-03-10T11:49:38.144833+0000 mon.a (mon.0) 287 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-10T11:49:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:39 vm07 bash[46158]: cluster 2026-03-10T11:49:38.144858+0000 mon.a (mon.0) 288 : cluster [WRN] Health check failed: Degraded data redundancy: 50/627 objects degraded (7.974%), 14 pgs degraded (PG_DEGRADED) 2026-03-10T11:49:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:39 vm07 bash[46158]: cluster 2026-03-10T11:49:38.144858+0000 mon.a (mon.0) 288 : cluster [WRN] Health check failed: Degraded data redundancy: 50/627 objects degraded (7.974%), 14 pgs degraded (PG_DEGRADED) 2026-03-10T11:49:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:39 vm07 bash[46158]: audit 2026-03-10T11:49:39.087999+0000 mgr.y 
(mgr.44107) 189 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:39 vm07 bash[46158]: audit 2026-03-10T11:49:39.087999+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:39.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:39 vm05 bash[65415]: cluster 2026-03-10T11:49:38.144833+0000 mon.a (mon.0) 287 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:39 vm05 bash[65415]: cluster 2026-03-10T11:49:38.144833+0000 mon.a (mon.0) 287 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:39 vm05 bash[65415]: cluster 2026-03-10T11:49:38.144858+0000 mon.a (mon.0) 288 : cluster [WRN] Health check failed: Degraded data redundancy: 50/627 objects degraded (7.974%), 14 pgs degraded (PG_DEGRADED) 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:39 vm05 bash[65415]: cluster 2026-03-10T11:49:38.144858+0000 mon.a (mon.0) 288 : cluster [WRN] Health check failed: Degraded data redundancy: 50/627 objects degraded (7.974%), 14 pgs degraded (PG_DEGRADED) 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:39 vm05 bash[65415]: audit 2026-03-10T11:49:39.087999+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:39 vm05 bash[65415]: audit 2026-03-10T11:49:39.087999+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:39 vm05 bash[68966]: cluster 2026-03-10T11:49:38.144833+0000 mon.a (mon.0) 287 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:39 vm05 bash[68966]: cluster 2026-03-10T11:49:38.144833+0000 mon.a (mon.0) 287 : cluster [WRN] Health check failed: Reduced data availability: 2 pgs inactive, 3 pgs peering (PG_AVAILABILITY) 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:39 vm05 bash[68966]: cluster 2026-03-10T11:49:38.144858+0000 mon.a (mon.0) 288 : cluster [WRN] Health check failed: Degraded data redundancy: 50/627 objects degraded (7.974%), 14 pgs degraded (PG_DEGRADED) 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:39 vm05 bash[68966]: cluster 2026-03-10T11:49:38.144858+0000 mon.a (mon.0) 288 : cluster [WRN] Health check failed: Degraded data redundancy: 50/627 objects degraded (7.974%), 14 pgs degraded (PG_DEGRADED) 2026-03-10T11:49:39.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:39 vm05 bash[68966]: audit 2026-03-10T11:49:39.087999+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:39.591 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:39 vm05 bash[68966]: audit 2026-03-10T11:49:39.087999+0000 mgr.y (mgr.44107) 189 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:39.591 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:39 vm05 bash[86636]: debug 2026-03-10T11:49:39.341+0000 7fa5460f0740 -1 osd.0 0 read_superblock omap replica is missing. 2026-03-10T11:49:39.591 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:39 vm05 bash[86636]: debug 2026-03-10T11:49:39.353+0000 7fa5460f0740 -1 osd.0 109 log_to_monitors true 2026-03-10T11:49:40.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:40 vm05 bash[65415]: audit 2026-03-10T11:49:39.360311+0000 mon.a (mon.0) 289 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:49:40.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:40 vm05 bash[65415]: audit 2026-03-10T11:49:39.360311+0000 mon.a (mon.0) 289 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:49:40.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:40 vm05 bash[65415]: cluster 2026-03-10T11:49:39.447529+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v79: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:40.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:40 vm05 bash[65415]: cluster 2026-03-10T11:49:39.447529+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v79: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:40.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:40 vm05 bash[68966]: audit 2026-03-10T11:49:39.360311+0000 mon.a (mon.0) 289 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:49:40.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:40 vm05 bash[68966]: audit 2026-03-10T11:49:39.360311+0000 mon.a (mon.0) 289 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-10T11:49:40.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:40 vm05 bash[68966]: cluster 2026-03-10T11:49:39.447529+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v79: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:40.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:40 vm05 bash[68966]: cluster 2026-03-10T11:49:39.447529+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v79: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%) 2026-03-10T11:49:40.591 
2026-03-10T11:49:40.591 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:49:40 vm05 bash[86636]: debug 2026-03-10T11:49:40.361+0000 7fa53de9b640 -1 osd.0 109 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T11:49:40.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:40 vm07 bash[46158]: audit 2026-03-10T11:49:39.360311+0000 mon.a (mon.0) 289 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
2026-03-10T11:49:40.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:40 vm07 bash[46158]: cluster 2026-03-10T11:49:39.447529+0000 mgr.y (mgr.44107) 190 : cluster [DBG] pgmap v79: 161 pgs: 29 active+undersized, 19 peering, 14 active+undersized+degraded, 99 active+clean; 457 KiB data, 146 MiB used, 160 GiB / 160 GiB avail; 50/627 objects degraded (7.974%)
2026-03-10T11:49:41.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:41 vm07 bash[46158]: audit 2026-03-10T11:49:40.339491+0000 mon.a (mon.0) 290 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T11:49:41.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:41 vm07 bash[46158]: cluster 2026-03-10T11:49:40.341493+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in
2026-03-10T11:49:41.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:41 vm07 bash[46158]: audit 2026-03-10T11:49:40.342212+0000 mon.a (mon.0) 292 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
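[annotation, not part of the captured log] On boot an OSD registers itself in the CRUSH map, which is what the set-device-class/create-or-move dispatches above record. The same mon commands expressed as CLI (illustrative sketch; the weight 0.0195 is this 20 GiB VM disk expressed in TiB, 20/1024):

    # pin osd.0's device class, then (re)place it under its host bucket
    ceph osd crush set-device-class hdd 0
    ceph osd crush create-or-move osd.0 0.0195 host=vm05 root=default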
2026-03-10T11:49:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:41 vm05 bash[65415]: audit 2026-03-10T11:49:40.339491+0000 mon.a (mon.0) 290 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T11:49:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:41 vm05 bash[65415]: cluster 2026-03-10T11:49:40.341493+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in
2026-03-10T11:49:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:41 vm05 bash[65415]: audit 2026-03-10T11:49:40.342212+0000 mon.a (mon.0) 292 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:49:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:41 vm05 bash[68966]: audit 2026-03-10T11:49:40.339491+0000 mon.a (mon.0) 290 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
2026-03-10T11:49:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:41 vm05 bash[68966]: cluster 2026-03-10T11:49:40.341493+0000 mon.a (mon.0) 291 : cluster [DBG] osdmap e114: 8 total, 7 up, 8 in
2026-03-10T11:49:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:41 vm05 bash[68966]: audit 2026-03-10T11:49:40.342212+0000 mon.a (mon.0) 292 : audit [INF] from='osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:49:42.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:42 vm05 bash[68966]: cluster 2026-03-10T11:49:41.339991+0000 mon.a (mon.0) 293 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:42 vm05 bash[68966]: cluster 2026-03-10T11:49:41.345352+0000 mon.a (mon.0) 294 : cluster [INF] osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067] boot
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:42 vm05 bash[68966]: cluster 2026-03-10T11:49:41.345457+0000 mon.a (mon.0) 295 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:42 vm05 bash[68966]: audit 2026-03-10T11:49:41.345687+0000 mon.c (mon.1) 208 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:42 vm05 bash[68966]: cluster 2026-03-10T11:49:41.447917+0000 mgr.y (mgr.44107) 191 : cluster [DBG] pgmap v82: 161 pgs: 1 active+clean+remapped, 35 active+undersized, 8 peering, 19 active+undersized+degraded, 98 active+clean; 457 KiB data, 164 MiB used, 160 GiB / 160 GiB avail; 66/627 objects degraded (10.526%)
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:42 vm05 bash[68966]: audit 2026-03-10T11:49:42.111412+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:42 vm05 bash[68966]: audit 2026-03-10T11:49:42.207056+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44107 ' entity='mgr.y'
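[annotation, not part of the captured log] Once osd.0 reports boot, the mgr immediately refreshes its metadata (the "osd metadata" dispatch above). Equivalent CLI to verify the rejoin by hand (illustrative):

    ceph osd metadata 0   # container image, devices, numa info for osd.0
    ceph osd tree         # confirm osd.0 is up and placed under host vm05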
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:42 vm05 bash[68966]: cluster 2026-03-10T11:49:42.371702+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:42 vm05 bash[65415]: cluster 2026-03-10T11:49:41.339991+0000 mon.a (mon.0) 293 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:42 vm05 bash[65415]: cluster 2026-03-10T11:49:41.345352+0000 mon.a (mon.0) 294 : cluster [INF] osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067] boot
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:42 vm05 bash[65415]: cluster 2026-03-10T11:49:41.345457+0000 mon.a (mon.0) 295 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:42 vm05 bash[65415]: audit 2026-03-10T11:49:41.345687+0000 mon.c (mon.1) 208 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:42 vm05 bash[65415]: cluster 2026-03-10T11:49:41.447917+0000 mgr.y (mgr.44107) 191 : cluster [DBG] pgmap v82: 161 pgs: 1 active+clean+remapped, 35 active+undersized, 8 peering, 19 active+undersized+degraded, 98 active+clean; 457 KiB data, 164 MiB used, 160 GiB / 160 GiB avail; 66/627 objects degraded (10.526%)
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:42 vm05 bash[65415]: audit 2026-03-10T11:49:42.111412+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:42 vm05 bash[65415]: audit 2026-03-10T11:49:42.207056+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:42.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:42 vm05 bash[65415]: cluster 2026-03-10T11:49:42.371702+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in
2026-03-10T11:49:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:42 vm07 bash[46158]: cluster 2026-03-10T11:49:41.339991+0000 mon.a (mon.0) 293 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:49:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:42 vm07 bash[46158]: cluster 2026-03-10T11:49:41.345352+0000 mon.a (mon.0) 294 : cluster [INF] osd.0 [v2:192.168.123.105:6802/1356388067,v1:192.168.123.105:6803/1356388067] boot
2026-03-10T11:49:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:42 vm07 bash[46158]: cluster 2026-03-10T11:49:41.345457+0000 mon.a (mon.0) 295 : cluster [DBG] osdmap e115: 8 total, 8 up, 8 in
2026-03-10T11:49:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:42 vm07 bash[46158]: audit 2026-03-10T11:49:41.345687+0000 mon.c (mon.1) 208 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-10T11:49:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:42 vm07 bash[46158]: cluster 2026-03-10T11:49:41.447917+0000 mgr.y (mgr.44107) 191 : cluster [DBG] pgmap v82: 161 pgs: 1 active+clean+remapped, 35 active+undersized, 8 peering, 19 active+undersized+degraded, 98 active+clean; 457 KiB data, 164 MiB used, 160 GiB / 160 GiB avail; 66/627 objects degraded (10.526%)
2026-03-10T11:49:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:42 vm07 bash[46158]: audit 2026-03-10T11:49:42.111412+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:42 vm07 bash[46158]: audit 2026-03-10T11:49:42.207056+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:42 vm07 bash[46158]: cluster 2026-03-10T11:49:42.371702+0000 mon.a (mon.0) 298 : cluster [DBG] osdmap e116: 8 total, 8 up, 8 in
2026-03-10T11:49:44.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:43 vm07 bash[46158]: audit 2026-03-10T11:49:42.915240+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:44.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:43 vm07 bash[46158]: audit 2026-03-10T11:49:42.923488+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:44.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:43 vm07 bash[46158]: cluster 2026-03-10T11:49:43.559332+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in
2026-03-10T11:49:44.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:43 vm05 bash[65415]: audit 2026-03-10T11:49:42.915240+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:44.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:43 vm05 bash[65415]: audit 2026-03-10T11:49:42.923488+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:44.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:43 vm05 bash[65415]: cluster 2026-03-10T11:49:43.559332+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in
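[annotation, not part of the captured log] osdmap epochs e114-e117 above track osd.0 leaving and rejoining the map (7 up -> 8 up). The same state seen from the CLI (illustrative; output format is approximate):

    ceph osd stat                # e.g. "8 osds: 8 up, 8 in; epoch: e117"
    ceph osd dump | head -n 20   # current epoch and per-osd state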
2026-03-10T11:49:44.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:43 vm05 bash[68966]: audit 2026-03-10T11:49:42.915240+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:44.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:43 vm05 bash[68966]: audit 2026-03-10T11:49:42.923488+0000 mon.a (mon.0) 300 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:44.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:43 vm05 bash[68966]: cluster 2026-03-10T11:49:43.559332+0000 mon.a (mon.0) 301 : cluster [DBG] osdmap e117: 8 total, 8 up, 8 in
2026-03-10T11:49:45.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:44 vm07 bash[46158]: cluster 2026-03-10T11:49:43.448270+0000 mgr.y (mgr.44107) 192 : cluster [DBG] pgmap v84: 161 pgs: 9 remapped+peering, 1 active+clean+remapped, 17 active+undersized, 24 peering, 13 active+undersized+degraded, 97 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 578 B/s rd, 0 op/s; 45/627 objects degraded (7.177%); 1 B/s, 0 objects/s recovering
2026-03-10T11:49:45.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:44 vm07 bash[46158]: cluster 2026-03-10T11:49:44.499290+0000 mon.a (mon.0) 302 : cluster [WRN] Health check update: Degraded data redundancy: 45/627 objects degraded (7.177%), 13 pgs degraded (PG_DEGRADED)
2026-03-10T11:49:45.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:44 vm07 bash[46158]: cluster 2026-03-10T11:49:44.499316+0000 mon.a (mon.0) 303 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 2 pgs peering)
2026-03-10T11:49:45.282 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:44 vm05 bash[65415]: cluster 2026-03-10T11:49:43.448270+0000 mgr.y (mgr.44107) 192 : cluster [DBG] pgmap v84: 161 pgs: 9 remapped+peering, 1 active+clean+remapped, 17 active+undersized, 24 peering, 13 active+undersized+degraded, 97 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 578 B/s rd, 0 op/s; 45/627 objects degraded (7.177%); 1 B/s, 0 objects/s recovering
2026-03-10T11:49:45.282 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:44 vm05 bash[65415]: cluster 2026-03-10T11:49:44.499290+0000 mon.a (mon.0) 302 : cluster [WRN] Health check update: Degraded data redundancy: 45/627 objects degraded (7.177%), 13 pgs degraded (PG_DEGRADED)
2026-03-10T11:49:45.282 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:44 vm05 bash[65415]: cluster 2026-03-10T11:49:44.499316+0000 mon.a (mon.0) 303 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 2 pgs peering)
2026-03-10T11:49:45.282 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:44 vm05 bash[68966]: cluster 2026-03-10T11:49:43.448270+0000 mgr.y (mgr.44107) 192 : cluster [DBG] pgmap v84: 161 pgs: 9 remapped+peering, 1 active+clean+remapped, 17 active+undersized, 24 peering, 13 active+undersized+degraded, 97 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 578 B/s rd, 0 op/s; 45/627 objects degraded (7.177%); 1 B/s, 0 objects/s recovering
2026-03-10T11:49:45.282 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:44 vm05 bash[68966]: cluster 2026-03-10T11:49:44.499290+0000 mon.a (mon.0) 302 : cluster [WRN] Health check update: Degraded data redundancy: 45/627 objects degraded (7.177%), 13 pgs degraded (PG_DEGRADED)
2026-03-10T11:49:45.282 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:44 vm05 bash[68966]: cluster 2026-03-10T11:49:44.499316+0000 mon.a (mon.0) 303 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg inactive, 2 pgs peering)
2026-03-10T11:49:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:46 vm07 bash[46158]: cluster 2026-03-10T11:49:45.448619+0000 mgr.y (mgr.44107) 193 : cluster [DBG] pgmap v86: 161 pgs: 6 remapped+peering, 5 active+undersized, 24 peering, 5 active+undersized+degraded, 121 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 11 KiB/s rd, 11 op/s; 16/627 objects degraded (2.552%); 78 B/s, 0 objects/s recovering
2026-03-10T11:49:47.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:46 vm05 bash[65415]: cluster 2026-03-10T11:49:45.448619+0000 mgr.y (mgr.44107) 193 : cluster [DBG] pgmap v86: 161 pgs: 6 remapped+peering, 5 active+undersized, 24 peering, 5 active+undersized+degraded, 121 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 11 KiB/s rd, 11 op/s; 16/627 objects degraded (2.552%); 78 B/s, 0 objects/s recovering
2026-03-10T11:49:47.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:46 vm05 bash[68966]: cluster 2026-03-10T11:49:45.448619+0000 mgr.y (mgr.44107) 193 : cluster [DBG] pgmap v86: 161 pgs: 6 remapped+peering, 5 active+undersized, 24 peering, 5 active+undersized+degraded, 121 active+clean; 457 KiB data, 165 MiB used, 160 GiB / 160 GiB avail; 11 KiB/s rd, 11 op/s; 16/627 objects degraded (2.552%); 78 B/s, 0 objects/s recovering
2026-03-10T11:49:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:47 vm05 bash[65415]: cluster 2026-03-10T11:49:47.937161+0000 mon.a (mon.0) 304 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 16/627 objects degraded (2.552%), 5 pgs degraded)
2026-03-10T11:49:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:47 vm05 bash[65415]: cluster 2026-03-10T11:49:47.937237+0000 mon.a (mon.0) 305 : cluster [INF] Cluster is now healthy
2026-03-10T11:49:48.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:47 vm05 bash[68966]: cluster 2026-03-10T11:49:47.937161+0000 mon.a (mon.0) 304 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 16/627 objects degraded (2.552%), 5 pgs degraded)
2026-03-10T11:49:48.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:47 vm05 bash[68966]: cluster 2026-03-10T11:49:47.937237+0000 mon.a (mon.0) 305 : cluster [INF] Cluster is now healthy
2026-03-10T11:49:48.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:47 vm07 bash[46158]: cluster 2026-03-10T11:49:47.937161+0000 mon.a (mon.0) 304 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 16/627 objects degraded (2.552%), 5 pgs degraded)
2026-03-10T11:49:48.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:47 vm07 bash[46158]: cluster 2026-03-10T11:49:47.937237+0000 mon.a (mon.0) 305 : cluster [INF] Cluster is now healthy
2026-03-10T11:49:49.027 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: cluster 2026-03-10T11:49:47.449055+0000 mgr.y (mgr.44107) 194 : cluster [DBG] pgmap v87: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 9.5 KiB/s rd, 9 op/s; 131 B/s, 0 objects/s recovering
2026-03-10T11:49:49.027 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.554652+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.44107 ' entity='mgr.y'
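[annotation, not part of the captured log] The pgmap progression v82 -> v84 -> v86 -> v87 shows recovery draining (66 -> 45 -> 16 -> 0 objects degraded) until "Cluster is now healthy". A way to watch the same convergence by hand during an upgrade (illustrative):

    ceph pg stat   # one-line PG state summary
    ceph -s        # full status, including recovery io and health checks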
2026-03-10T11:49:49.027 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.560497+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.562645+0000 mon.c (mon.1) 209 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.563724+0000 mon.c (mon.1) 210 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.568036+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.609866+0000 mon.c (mon.1) 211 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.611175+0000 mon.c (mon.1) 212 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.612603+0000 mon.c (mon.1) 213 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
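[annotation, not part of the captured log] After the cluster settles, cephadm redistributes a minimal ceph.conf and the admin keyring to its managed hosts, which is what the "config generate-minimal-conf" and "auth get" dispatches record. The same calls by hand (illustrative):

    ceph config generate-minimal-conf   # minimal ceph.conf (mon addresses only)
    ceph auth get client.admin          # the keyring cephadm copies to hosts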
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.619414+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.621408+0000 mon.c (mon.1) 214 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.622458+0000 mon.c (mon.1) 215 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.623401+0000 mon.c (mon.1) 216 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.624326+0000 mon.c (mon.1) 217 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.625162+0000 mon.c (mon.1) 218 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.625970+0000 mon.c (mon.1) 219 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
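[annotation, not part of the captured log] The burst of "versions" dispatches is most likely the orchestrator re-checking per-daemon versions once the staggered upgrade has converged. Manually (illustrative):

    ceph versions              # running version per daemon type
    ceph orch upgrade status   # should report no upgrade in progress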
"versions"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.627222+0000 mon.c (mon.1) 220 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.627222+0000 mon.c (mon.1) 220 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.627371+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.627371+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.632770+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.632770+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.634506+0000 mon.c (mon.1) 221 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.634506+0000 mon.c (mon.1) 221 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.634657+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.634657+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.637350+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.637350+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:49:49.028 
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.639191+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.639346+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.642953+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.644656+0000 mon.c (mon.1) 223 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.644818+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.645294+0000 mon.c (mon.1) 224 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
"mds"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.645434+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:49:49.028 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.645434+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.645964+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.645964+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.646101+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.646101+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.646561+0000 mon.c (mon.1) 226 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.646561+0000 mon.c (mon.1) 226 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.646695+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.646695+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.647176+0000 mon.c (mon.1) 227 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.647176+0000 mon.c (mon.1) 227 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' 
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.647312+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.647812+0000 mon.c (mon.1) 228 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.647957+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.648471+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.648605+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.649038+0000 mon.c (mon.1) 230 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.649175+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.649655+0000 mon.c (mon.1) 231 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.649788+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.653465+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.655598+0000 mon.c (mon.1) 232 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.655809+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
audit 2026-03-10T11:49:48.655809+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.656364+0000 mon.c (mon.1) 233 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.656364+0000 mon.c (mon.1) 233 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.656509+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.656509+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.657185+0000 mon.c (mon.1) 234 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.657185+0000 mon.c (mon.1) 234 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.657381+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.657381+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.657915+0000 mon.c (mon.1) 235 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.657915+0000 mon.c (mon.1) 235 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.658328+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 
2026-03-10T11:49:48.658328+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.658837+0000 mon.c (mon.1) 236 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.658837+0000 mon.c (mon.1) 236 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.659002+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.659002+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.659555+0000 mon.c (mon.1) 237 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.659555+0000 mon.c (mon.1) 237 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.659926+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.659926+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:48 vm05 bash[65415]: audit 2026-03-10T11:49:48.660733+0000 mon.c (mon.1) 238 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.660733+0000 mon.c (mon.1) 238 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.660920+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 
2026-03-10T11:49:48.660920+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.664766+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.664766+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.665302+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.665302+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.965434+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:49.029 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.965434+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.966025+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.966025+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.971045+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: audit 2026-03-10T11:49:48.971045+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: cluster 2026-03-10T11:49:47.449055+0000 mgr.y (mgr.44107) 194 : cluster [DBG] pgmap v87: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 9.5 KiB/s rd, 9 op/s; 131 B/s, 0 objects/s recovering 2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: cluster 2026-03-10T11:49:47.449055+0000 mgr.y (mgr.44107) 194 : cluster [DBG] pgmap v87: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 9.5 KiB/s rd, 9 op/s; 131 B/s, 0 objects/s recovering 
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.554652+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.560497+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.562645+0000 mon.c (mon.1) 209 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.563724+0000 mon.c (mon.1) 210 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.568036+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.609866+0000 mon.c (mon.1) 211 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.611175+0000 mon.c (mon.1) 212 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.612603+0000 mon.c (mon.1) 213 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.619414+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.621408+0000 mon.c (mon.1) 214 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.622458+0000 mon.c (mon.1) 215 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.623401+0000 mon.c (mon.1) 216 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.624326+0000 mon.c (mon.1) 217 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.625162+0000 mon.c (mon.1) 218 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.625970+0000 mon.c (mon.1) 219 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.627222+0000 mon.c (mon.1) 220 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.627371+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.632770+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.634506+0000 mon.c (mon.1) 221 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.634657+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.637350+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.639191+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.639346+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.642953+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:49:49.030 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.644656+0000 mon.c (mon.1) 223 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.644818+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.645294+0000 mon.c (mon.1) 224 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.645434+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.645964+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.646101+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.646561+0000 mon.c (mon.1) 226 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.646695+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.647176+0000 mon.c (mon.1) 227 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.647312+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.647812+0000 mon.c (mon.1) 228 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.647957+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.648471+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.648605+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.649038+0000 mon.c (mon.1) 230 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.649175+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.649655+0000 mon.c (mon.1) 231 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.649788+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.653465+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.655598+0000 mon.c (mon.1) 232 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.655809+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.656364+0000 mon.c (mon.1) 233 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.656509+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.657185+0000 mon.c (mon.1) 234 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.657381+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.657915+0000 mon.c (mon.1) 235 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.658328+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.658837+0000 mon.c (mon.1) 236 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.031 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:49:48 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:49:48] "GET /metrics HTTP/1.1" 200 37734 "" "Prometheus/2.51.0"
2026-03-10T11:49:49.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.659002+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.659555+0000 mon.c (mon.1) 237 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.659926+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.660733+0000 mon.c (mon.1) 238 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:49:49.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.660920+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:49:49.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.664766+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:49:49.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.665302+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:49.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.965434+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:49:49.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.966025+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:49:49.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: audit 2026-03-10T11:49:48.971045+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:49.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: cluster 2026-03-10T11:49:47.449055+0000 mgr.y (mgr.44107) 194 : cluster [DBG] pgmap v87: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 9.5 KiB/s rd, 9 op/s; 131 B/s, 0 objects/s recovering 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: cluster 2026-03-10T11:49:47.449055+0000 mgr.y (mgr.44107) 194 : cluster [DBG] pgmap v87: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 9.5 KiB/s rd, 9 op/s; 131 B/s, 0 objects/s recovering 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.554652+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.554652+0000 mon.a (mon.0) 306 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.560497+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.560497+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.562645+0000 mon.c (mon.1) 209 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.562645+0000 mon.c (mon.1) 209 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.563724+0000 mon.c (mon.1) 210 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.563724+0000 mon.c (mon.1) 210 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.568036+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.568036+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.609866+0000 mon.c (mon.1) 211 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.609866+0000 mon.c (mon.1) 211 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.611175+0000 mon.c (mon.1) 212 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.611175+0000 mon.c (mon.1) 212 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.612603+0000 mon.c (mon.1) 213 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.612603+0000 mon.c (mon.1) 213 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.619414+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.619414+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.621408+0000 mon.c (mon.1) 214 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:48 vm07 bash[46158]: audit 2026-03-10T11:49:48.621408+0000 mon.c (mon.1) 214 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.622458+0000 mon.c (mon.1) 215 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.622458+0000 mon.c (mon.1) 215 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.623401+0000 mon.c (mon.1) 216 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.623401+0000 mon.c (mon.1) 216 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.624326+0000 mon.c (mon.1) 217 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.624326+0000 mon.c (mon.1) 217 : audit [DBG] 
from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.625162+0000 mon.c (mon.1) 218 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.625162+0000 mon.c (mon.1) 218 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.625970+0000 mon.c (mon.1) 219 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.625970+0000 mon.c (mon.1) 219 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.627222+0000 mon.c (mon.1) 220 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.627222+0000 mon.c (mon.1) 220 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.627371+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.627371+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.632770+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.632770+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.634506+0000 mon.c (mon.1) 221 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.634506+0000 mon.c (mon.1) 221 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.447 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.634657+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.634657+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.637350+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.637350+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.639191+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.639191+0000 mon.c (mon.1) 222 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.639346+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.639346+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.642953+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.642953+0000 mon.a (mon.0) 315 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.644656+0000 mon.c (mon.1) 223 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.644656+0000 mon.c (mon.1) 223 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:49:49.447 
2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.644818+0000 mon.a (mon.0) 316 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.645294+0000 mon.c (mon.1) 224 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:49:49.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.645434+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.645964+0000 mon.c (mon.1) 225 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.646101+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.646561+0000 mon.c (mon.1) 226 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.646695+0000 mon.a (mon.0) 319 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.647176+0000 mon.c (mon.1) 227 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.647312+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.647812+0000 mon.c (mon.1) 228 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.647957+0000 mon.a (mon.0) 321 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.648471+0000 mon.c (mon.1) 229 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.648605+0000 mon.a (mon.0) 322 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.649038+0000 mon.c (mon.1) 230 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.649175+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.649655+0000 mon.c (mon.1) 231 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.649788+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.653465+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.655598+0000 mon.c (mon.1) 232 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.655809+0000 mon.a (mon.0) 326 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.656364+0000 mon.c (mon.1) 233 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.656509+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.657185+0000 mon.c (mon.1) 234 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.657381+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.657915+0000 mon.c (mon.1) 235 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.658328+0000 mon.a (mon.0) 329 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.658837+0000 mon.c (mon.1) 236 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.659002+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.659555+0000 mon.c (mon.1) 237 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.659926+0000 mon.a (mon.0) 331 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.660733+0000 mon.c (mon.1) 238 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.660920+0000 mon.a (mon.0) 332 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.664766+0000 mon.a (mon.0) 333 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.665302+0000 mon.c (mon.1) 239 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:49.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.965434+0000 mon.c (mon.1) 240 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:49:49.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.966025+0000 mon.c (mon.1) 241 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:49:49.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.971045+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:49.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: audit 2026-03-10T11:49:48.971045+0000 mon.a (mon.0) 334 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:50.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:49 vm05 bash[65415]: cephadm 2026-03-10T11:49:48.613020+0000 mgr.y (mgr.44107) 195 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: cephadm 2026-03-10T11:49:48.613020+0000 mgr.y (mgr.44107) 195 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: cephadm 2026-03-10T11:49:48.626498+0000 mgr.y (mgr.44107) 196 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: cephadm 2026-03-10T11:49:48.626498+0000 mgr.y (mgr.44107) 196 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: cephadm 2026-03-10T11:49:48.660375+0000 mgr.y (mgr.44107) 197 : cephadm [INF] Upgrade: Complete! 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: cephadm 2026-03-10T11:49:48.660375+0000 mgr.y (mgr.44107) 197 : cephadm [INF] Upgrade: Complete! 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.026161+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.026161+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.040347+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.040347+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.040930+0000 mon.c (mon.1) 244 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.040930+0000 mon.c (mon.1) 244 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 
11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.045894+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.045894+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.096427+0000 mgr.y (mgr.44107) 198 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:50 vm05 bash[65415]: audit 2026-03-10T11:49:49.096427+0000 mgr.y (mgr.44107) 198 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: cephadm 2026-03-10T11:49:48.613020+0000 mgr.y (mgr.44107) 195 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: cephadm 2026-03-10T11:49:48.613020+0000 mgr.y (mgr.44107) 195 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: cephadm 2026-03-10T11:49:48.626498+0000 mgr.y (mgr.44107) 196 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:49 vm05 bash[68966]: cephadm 2026-03-10T11:49:48.626498+0000 mgr.y (mgr.44107) 196 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: cephadm 2026-03-10T11:49:48.660375+0000 mgr.y (mgr.44107) 197 : cephadm [INF] Upgrade: Complete! 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: cephadm 2026-03-10T11:49:48.660375+0000 mgr.y (mgr.44107) 197 : cephadm [INF] Upgrade: Complete! 
2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.026161+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.026161+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.040347+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.040347+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.040930+0000 mon.c (mon.1) 244 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.040930+0000 mon.c (mon.1) 244 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.045894+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.045894+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.096427+0000 mgr.y (mgr.44107) 198 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:50.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:50 vm05 bash[68966]: audit 2026-03-10T11:49:49.096427+0000 mgr.y (mgr.44107) 198 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: cephadm 2026-03-10T11:49:48.613020+0000 mgr.y (mgr.44107) 195 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: cephadm 2026-03-10T11:49:48.613020+0000 mgr.y (mgr.44107) 195 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: cephadm 2026-03-10T11:49:48.626498+0000 mgr.y (mgr.44107) 196 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:49 vm07 bash[46158]: cephadm 2026-03-10T11:49:48.626498+0000 mgr.y (mgr.44107) 196 : cephadm 
2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:50 vm07 bash[46158]: cephadm 2026-03-10T11:49:48.660375+0000 mgr.y (mgr.44107) 197 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:50 vm07 bash[46158]: audit 2026-03-10T11:49:49.026161+0000 mon.c (mon.1) 242 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:50 vm07 bash[46158]: audit 2026-03-10T11:49:49.040347+0000 mon.c (mon.1) 243 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:50 vm07 bash[46158]: audit 2026-03-10T11:49:49.040930+0000 mon.c (mon.1) 244 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:50 vm07 bash[46158]: audit 2026-03-10T11:49:49.045894+0000 mon.a (mon.0) 335 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:50.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:50 vm07 bash[46158]: audit 2026-03-10T11:49:49.096427+0000 mgr.y (mgr.44107) 198 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:51 vm05 bash[65415]: cluster 2026-03-10T11:49:49.449329+0000 mgr.y (mgr.44107) 199 : cluster [DBG] pgmap v88: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 7.2 KiB/s rd, 7 op/s; 100 B/s, 0 objects/s recovering
2026-03-10T11:49:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:51 vm05 bash[65415]: audit 2026-03-10T11:49:50.487991+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:51 vm05 bash[65415]: audit 2026-03-10T11:49:50.488868+0000 mon.c (mon.1) 245 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:51.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:51 vm05 bash[65415]: audit 2026-03-10T11:49:50.839421+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:51.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:51 vm05 bash[68966]: cluster 2026-03-10T11:49:49.449329+0000 mgr.y (mgr.44107) 199 : cluster [DBG] pgmap v88: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 7.2 KiB/s rd, 7 op/s; 100 B/s, 0 objects/s recovering
2026-03-10T11:49:51.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:51 vm05 bash[68966]: audit 2026-03-10T11:49:50.487991+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:51.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:51 vm05 bash[68966]: audit 2026-03-10T11:49:50.488868+0000 mon.c (mon.1) 245 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:51.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:51 vm05 bash[68966]: audit 2026-03-10T11:49:50.839421+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:51 vm07 bash[46158]: cluster 2026-03-10T11:49:49.449329+0000 mgr.y (mgr.44107) 199 : cluster [DBG] pgmap v88: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 7.2 KiB/s rd, 7 op/s; 100 B/s, 0 objects/s recovering
2026-03-10T11:49:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:51 vm07 bash[46158]: audit 2026-03-10T11:49:50.487991+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:51 vm07 bash[46158]: audit 2026-03-10T11:49:50.488868+0000 mon.c (mon.1) 245 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:49:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:51 vm07 bash[46158]: audit 2026-03-10T11:49:50.839421+0000 mon.a (mon.0) 337 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:49:53.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:53 vm07 bash[46158]: cluster 2026-03-10T11:49:51.449671+0000 mgr.y (mgr.44107) 200 : cluster [DBG] pgmap v89: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 6.6 KiB/s rd, 6 op/s; 87 B/s, 0 objects/s recovering
2026-03-10T11:49:53.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:53 vm05 bash[65415]: cluster 2026-03-10T11:49:51.449671+0000 mgr.y (mgr.44107) 200 : cluster [DBG] pgmap v89: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 6.6 KiB/s rd, 6 op/s; 87 B/s, 0 objects/s recovering
2026-03-10T11:49:53.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:53 vm05 bash[68966]: cluster 2026-03-10T11:49:51.449671+0000 mgr.y (mgr.44107) 200 : cluster [DBG] pgmap v89: 161 pgs: 1 active+recovering, 160 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 6.6 KiB/s rd, 6 op/s; 87 B/s, 0 objects/s recovering
2026-03-10T11:49:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:54 vm07 bash[46158]: cluster 2026-03-10T11:49:53.450150+0000 mgr.y (mgr.44107) 201 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 6.0 KiB/s rd, 5 op/s; 79 B/s, 0 objects/s recovering
2026-03-10T11:49:54.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:54 vm05 bash[65415]: cluster 2026-03-10T11:49:53.450150+0000 mgr.y (mgr.44107) 201 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 6.0 KiB/s rd, 5 op/s; 79 B/s, 0 objects/s recovering
2026-03-10T11:49:54.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:54 vm05 bash[68966]: cluster 2026-03-10T11:49:53.450150+0000 mgr.y (mgr.44107) 201 : cluster [DBG] pgmap v90: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 6.0 KiB/s rd, 5 op/s; 79 B/s, 0 objects/s recovering
2026-03-10T11:49:56.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:56 vm05 bash[65415]: cluster 2026-03-10T11:49:55.450492+0000 mgr.y (mgr.44107) 202 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 5.5 KiB/s rd, 5 op/s; 67 B/s, 0 objects/s recovering
2026-03-10T11:49:56.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:56 vm05 bash[68966]: cluster 2026-03-10T11:49:55.450492+0000 mgr.y (mgr.44107) 202 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 5.5 KiB/s rd, 5 op/s; 67 B/s, 0 objects/s recovering
2026-03-10T11:49:56.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:56 vm07 bash[46158]: cluster 2026-03-10T11:49:55.450492+0000 mgr.y (mgr.44107) 202 : cluster [DBG] pgmap v91: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 5.5 KiB/s rd, 5 op/s; 67 B/s, 0 objects/s recovering
2026-03-10T11:49:58.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:58 vm05 bash[65415]: cluster 2026-03-10T11:49:57.450948+0000 mgr.y (mgr.44107) 203 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 33 B/s, 0 objects/s recovering
2026-03-10T11:49:58.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:58 vm05 bash[68966]: cluster 2026-03-10T11:49:57.450948+0000 mgr.y (mgr.44107) 203 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 33 B/s, 0 objects/s recovering
2026-03-10T11:49:58.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:58 vm07 bash[46158]: cluster 2026-03-10T11:49:57.450948+0000 mgr.y (mgr.44107) 203 : cluster [DBG] pgmap v92: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 33 B/s, 0 objects/s recovering
2026-03-10T11:49:59.098 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:49:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:49:58] "GET /metrics HTTP/1.1" 200 37780 "" "Prometheus/2.51.0"
2026-03-10T11:49:59.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:49:59 vm05 bash[65415]: audit 2026-03-10T11:49:59.102075+0000 mgr.y (mgr.44107) 204 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:59.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:49:59 vm05 bash[68966]: audit 2026-03-10T11:49:59.102075+0000 mgr.y (mgr.44107) 204 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:49:59.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:49:59 vm07 bash[46158]: audit 2026-03-10T11:49:59.102075+0000 mgr.y (mgr.44107) 204 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:00.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:00 vm05 bash[65415]: cluster 2026-03-10T11:49:59.451222+0000 mgr.y (mgr.44107) 205 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:00.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:00 vm05 bash[65415]: cluster 2026-03-10T11:50:00.001075+0000 mon.a (mon.0) 338 : cluster [INF] overall HEALTH_OK
2026-03-10T11:50:00.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:00 vm05 bash[68966]: cluster 2026-03-10T11:49:59.451222+0000 mgr.y (mgr.44107) 205 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:00.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:00 vm05 bash[68966]: cluster 2026-03-10T11:50:00.001075+0000 mon.a (mon.0) 338 : cluster [INF] overall HEALTH_OK
2026-03-10T11:50:00.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:00 vm07 bash[46158]: cluster 2026-03-10T11:49:59.451222+0000 mgr.y (mgr.44107) 205 : cluster [DBG] pgmap v93: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:00.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:00 vm07 bash[46158]: cluster 2026-03-10T11:50:00.001075+0000 mon.a (mon.0) 338 : cluster [INF] overall HEALTH_OK
2026-03-10T11:50:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:01 vm07 bash[46158]: audit 2026-03-10T11:50:00.849114+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:02.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:01 vm05 bash[68966]: audit 2026-03-10T11:50:00.849114+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:02.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:01 vm05 bash[65415]: audit 2026-03-10T11:50:00.849114+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:03.107 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:02 vm05 bash[65415]: cluster 2026-03-10T11:50:01.451528+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:03.107 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:02 vm05 bash[68966]: cluster 2026-03-10T11:50:01.451528+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:03.141 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:50:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:02 vm07 bash[46158]: cluster 2026-03-10T11:50:01.451528+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:50:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:02 vm07 bash[46158]: cluster 2026-03-10T11:50:01.451528+0000 mgr.y (mgr.44107) 206 : cluster [DBG] pgmap v94: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (16m) 21s ago 23m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (3m) 2m ago 23m 64.6M - 10.4.0 c8b91775d855 3d10fa6a70a7 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (4m) 21s ago 22m 44.0M - 3.5 e1d6a67b021e 5fb8678f46ba 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (4m) 2m ago 26m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (13m) 21s ago 26m 528M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (2m) 21s ago 26m 44.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (3m) 2m ago 26m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (2m) 21s ago 26m 41.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (16m) 21s ago 23m 7988k - 1.7.0 72c9c2088986 d4b69c85984a 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (16m) 2m ago 23m 7816k - 1.7.0 72c9c2088986 33ca1c822db8 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (26s) 21s ago 25m 13.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (25m) 21s ago 25m 58.7M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (79s) 21s ago 25m 44.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (95s) 21s ago 25m 67.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (24m) 2m ago 24m 56.0M 4096M 17.2.0 e1d6a67b021e 452f5de332b6 2026-03-10T11:50:03.576 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (24m) 2m ago 24m 52.5M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6 2026-03-10T11:50:03.577 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (24m) 2m ago 24m 51.5M 4096M 17.2.0 e1d6a67b021e cb67459019f8 2026-03-10T11:50:03.577 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (24m) 2m ago 24m 54.1M 4096M 17.2.0 e1d6a67b021e c542edbe96b5 2026-03-10T11:50:03.577 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (4m) 2m ago 23m 40.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4 2026-03-10T11:50:03.577 
2026-03-10T11:50:03.577 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (22m) 21s ago 22m 89.0M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:50:03.577 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (22m) 2m ago 22m 89.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:50:03.622 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.osd | length == 2'"'"''
2026-03-10T11:50:04.082 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:50:04.083 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:03 vm05 bash[65415]: audit 2026-03-10T11:50:03.079210+0000 mgr.y (mgr.44107) 207 : audit [DBG] from='client.44299 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:04.083 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:03 vm05 bash[68966]: audit 2026-03-10T11:50:03.079210+0000 mgr.y (mgr.44107) 207 : audit [DBG] from='client.44299 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:04.120 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '"'"'.up_to_date | length == 8'"'"''
2026-03-10T11:50:04.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:03 vm07 bash[46158]: audit 2026-03-10T11:50:03.079210+0000 mgr.y (mgr.44107) 207 : audit [DBG] from='client.44299 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:05.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:04 vm07 bash[46158]: cluster 2026-03-10T11:50:03.451995+0000 mgr.y (mgr.44107) 208 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:05.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:04 vm07 bash[46158]: audit 2026-03-10T11:50:03.576893+0000 mgr.y (mgr.44107) 209 : audit [DBG] from='client.54339 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:05.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:04 vm07 bash[46158]: audit 2026-03-10T11:50:04.076930+0000 mon.c (mon.1) 246 : audit [DBG] from='client.? 192.168.123.105:0/673700795' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:05.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:04 vm05 bash[65415]: cluster 2026-03-10T11:50:03.451995+0000 mgr.y (mgr.44107) 208 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:05.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:04 vm05 bash[65415]: audit 2026-03-10T11:50:03.576893+0000 mgr.y (mgr.44107) 209 : audit [DBG] from='client.54339 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:05.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:04 vm05 bash[65415]: audit 2026-03-10T11:50:04.076930+0000 mon.c (mon.1) 246 : audit [DBG] from='client.? 192.168.123.105:0/673700795' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
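The jq -e gates in the two cephadm shell probes above are what turn plain queries into pass/fail assertions: with -e, jq exits non-zero when the last value it outputs is false or null, so '.osd | length == 2' succeeds only while exactly two distinct OSD versions coexist, i.e. while the staggered upgrade is genuinely mid-flight. A minimal sketch of the same pattern outside the harness (the set -e wrapper is an assumption about how one would script it; the commands and filters are the ones in the log):

    #!/usr/bin/env bash
    set -e
    # Passes only while the cluster reports exactly two distinct OSD versions.
    ceph versions | jq -e '.osd | length == 2'
    # Pre-flight: the 8 daemons already on the target image (2 mgr + 3 mon + 3 osd
    # at this point in the run) must show up in the check's up_to_date list.
    ceph orch upgrade check quay.ceph.io/ceph-ci/ceph:$sha1 | jq -e '.up_to_date | length == 8'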
2026-03-10T11:50:05.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:04 vm05 bash[68966]: cluster 2026-03-10T11:50:03.451995+0000 mgr.y (mgr.44107) 208 : cluster [DBG] pgmap v95: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:05.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:04 vm05 bash[68966]: audit 2026-03-10T11:50:03.576893+0000 mgr.y (mgr.44107) 209 : audit [DBG] from='client.54339 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:05.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:04 vm05 bash[68966]: audit 2026-03-10T11:50:04.076930+0000 mon.c (mon.1) 246 : audit [DBG] from='client.? 192.168.123.105:0/673700795' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:05.918 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:50:05.968 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-10T11:50:06.084 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:05 vm05 bash[65415]: audit 2026-03-10T11:50:04.534689+0000 mgr.y (mgr.44107) 210 : audit [DBG] from='client.44314 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:06.084 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:05 vm05 bash[65415]: audit 2026-03-10T11:50:05.483055+0000 mon.c (mon.1) 247 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:06.084 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:05 vm05 bash[68966]: audit 2026-03-10T11:50:04.534689+0000 mgr.y (mgr.44107) 210 : audit [DBG] from='client.44314 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:06.085 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:05 vm05 bash[68966]: audit 2026-03-10T11:50:05.483055+0000 mon.c (mon.1) 247 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:06.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:05 vm07 bash[46158]: audit 2026-03-10T11:50:04.534689+0000 mgr.y (mgr.44107) 210 : audit [DBG] from='client.44314 -' entity='client.admin' cmd=[{"prefix": "orch upgrade check", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:06.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:05 vm07 bash[46158]: audit 2026-03-10T11:50:05.483055+0000 mon.c (mon.1) 247 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:06.384 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:50:06.384 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": null,
2026-03-10T11:50:06.384 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": false,
2026-03-10T11:50:06.384 INFO:teuthology.orchestra.run.vm05.stdout: "which": "",
2026-03-10T11:50:06.384 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:50:06.384 INFO:teuthology.orchestra.run.vm05.stdout: "progress": null,
2026-03-10T11:50:06.384 INFO:teuthology.orchestra.run.vm05.stdout: "message": "",
2026-03-10T11:50:06.384 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:50:06.384 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:50:06.431 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
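The idle status block above ("in_progress": false, no target image) together with clean health is the precondition the test establishes before starting the next staggered pass. Scripted by hand the same gate might look like this (a sketch, not the harness code; jq -e makes the status check fail the script if an upgrade is already running):

    # Refuse to proceed if an upgrade is already in flight or health is degraded.
    ceph orch upgrade status | jq -e '.in_progress == false' > /dev/null
    ceph health | grep -q HEALTH_OK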
2026-03-10T11:50:06.860 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
2026-03-10T11:50:06.912 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --daemon-types crash,osd'
2026-03-10T11:50:07.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:06 vm05 bash[65415]: cluster 2026-03-10T11:50:05.452339+0000 mgr.y (mgr.44107) 211 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:07.121 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:06 vm05 bash[65415]: audit 2026-03-10T11:50:06.861229+0000 mon.b (mon.2) 28 : audit [DBG] from='client.? 192.168.123.105:0/621474991' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:50:07.121 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:06 vm05 bash[68966]: cluster 2026-03-10T11:50:05.452339+0000 mgr.y (mgr.44107) 211 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:07.121 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:06 vm05 bash[68966]: audit 2026-03-10T11:50:06.861229+0000 mon.b (mon.2) 28 : audit [DBG] from='client.? 192.168.123.105:0/621474991' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:50:07.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:06 vm07 bash[46158]: cluster 2026-03-10T11:50:05.452339+0000 mgr.y (mgr.44107) 211 : cluster [DBG] pgmap v96: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:07.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:06 vm07 bash[46158]: audit 2026-03-10T11:50:06.861229+0000 mon.b (mon.2) 28 : audit [DBG] from='client.? 192.168.123.105:0/621474991' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:50:08.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:07 vm07 bash[46158]: audit 2026-03-10T11:50:06.388899+0000 mgr.y (mgr.44107) 212 : audit [DBG] from='client.34343 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:08.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:07 vm05 bash[65415]: audit 2026-03-10T11:50:06.388899+0000 mgr.y (mgr.44107) 212 : audit [DBG] from='client.34343 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:08.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:07 vm05 bash[68966]: audit 2026-03-10T11:50:06.388899+0000 mgr.y (mgr.44107) 212 : audit [DBG] from='client.34343 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:08.674 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:50:08.753 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done'
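Unwound from its '"'"' shell quoting, the polling loop the harness just launched reads as follows (same commands and 30 s cadence as in the log; only the formatting differs):

    # Keep polling while the upgrade is in progress and no error has been reported.
    while ceph orch upgrade status | jq '.in_progress' | grep true \
          && ! ceph orch upgrade status | jq '.message' | grep Error; do
        ceph orch ps              # per-daemon image and version
        ceph versions             # daemon counts per release
        ceph orch upgrade status
        sleep 30
    done

The --daemon-types crash,osd flag on the upgrade start above is what makes this pass staggered: only crash and osd daemons are touched on this round. Per the cephadm staggered-upgrade documentation, the same command also accepts --services, --hosts and --limit to narrow a pass further.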
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:08 vm05 bash[68966]: audit 2026-03-10T11:50:07.315624+0000 mgr.y (mgr.44107) 213 : audit [DBG] from='client.54369 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:08 vm05 bash[68966]: cluster 2026-03-10T11:50:07.452779+0000 mgr.y (mgr.44107) 214 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:08 vm05 bash[68966]: audit 2026-03-10T11:50:08.673477+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:08 vm05 bash[68966]: audit 2026-03-10T11:50:08.676870+0000 mon.c (mon.1) 248 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:08 vm05 bash[68966]: audit 2026-03-10T11:50:08.684477+0000 mon.c (mon.1) 249 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:08 vm05 bash[68966]: audit 2026-03-10T11:50:08.685169+0000 mon.c (mon.1) 250 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:08 vm05 bash[68966]: audit 2026-03-10T11:50:08.690125+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:50:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:50:08] "GET /metrics HTTP/1.1" 200 37754 "" "Prometheus/2.51.0"
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:08 vm05 bash[65415]: audit 2026-03-10T11:50:07.315624+0000 mgr.y (mgr.44107) 213 : audit [DBG] from='client.54369 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:08 vm05 bash[65415]: cluster 2026-03-10T11:50:07.452779+0000 mgr.y (mgr.44107) 214 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:08 vm05 bash[65415]: audit 2026-03-10T11:50:08.673477+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:08 vm05 bash[65415]: audit 2026-03-10T11:50:08.676870+0000 mon.c (mon.1) 248 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:09.010 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:08 vm05 bash[65415]: audit 2026-03-10T11:50:08.684477+0000 mon.c (mon.1) 249 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:09.011 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:08 vm05 bash[65415]: audit 2026-03-10T11:50:08.685169+0000 mon.c (mon.1) 250 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:09.011 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:08 vm05 bash[65415]: audit 2026-03-10T11:50:08.690125+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:08 vm07 bash[46158]: audit 2026-03-10T11:50:07.315624+0000 mgr.y (mgr.44107) 213 : audit [DBG] from='client.54369 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "daemon_types": "crash,osd", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:08 vm07 bash[46158]: cluster 2026-03-10T11:50:07.452779+0000 mgr.y (mgr.44107) 214 : cluster [DBG] pgmap v97: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:08 vm07 bash[46158]: audit 2026-03-10T11:50:08.673477+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:08 vm07 bash[46158]: audit 2026-03-10T11:50:08.676870+0000 mon.c (mon.1) 248 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:08 vm07 bash[46158]: audit 2026-03-10T11:50:08.684477+0000 mon.c (mon.1) 249 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:08 vm07 bash[46158]: audit 2026-03-10T11:50:08.685169+0000 mon.c (mon.1) 250 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:08 vm07 bash[46158]: audit 2026-03-10T11:50:08.690125+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:09.230 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:50:09.590 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:50:09.590 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (16m) 27s ago 23m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:50:09.590 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (4m) 2m ago 23m 64.6M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:50:09.590 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (4m) 27s ago 22m 44.0M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:50:09.590 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (4m) 2m ago 26m 465M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:50:09.590 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (13m) 27s ago 27m 528M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:50:09.590 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (2m) 27s ago 27m 44.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:50:09.590 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (3m) 2m ago 26m 37.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:50:09.590 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (2m) 27s ago 26m 41.1M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (16m) 27s ago 23m 7988k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (16m) 2m ago 23m 7816k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (32s) 27s ago 26m 13.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (25m) 27s ago 25m 58.7M 4096M 17.2.0 e1d6a67b021e 66628e3a12c8
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (85s) 27s ago 25m 44.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (101s) 27s ago 25m 67.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (24m) 2m ago 24m 56.0M 4096M 17.2.0 e1d6a67b021e 452f5de332b6
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (24m) 2m ago 24m 52.5M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (24m) 2m ago 24m 51.5M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (24m) 2m ago 24m 54.1M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (4m) 2m ago 23m 40.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (23m) 27s ago 23m 89.0M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:50:09.591 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (23m) 2m ago 23m 89.3M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:50:09.810 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:50:09.810 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:50:09.810 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:50:09.810 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:50:09.810 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:50:09.810 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:50:09.810 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:50:09.810 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 5,
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 7,
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:50:09.811 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:50:10.007 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:50:10.007 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T11:50:10.007 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true,
2026-03-10T11:50:10.007 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons of type(s) crash,osd",
2026-03-10T11:50:10.007 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:50:10.007 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "",
2026-03-10T11:50:10.007 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image",
2026-03-10T11:50:10.007 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:50:10.007 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:50:10.146 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:09 vm05 bash[65415]: cephadm 2026-03-10T11:50:08.668584+0000 mgr.y (mgr.44107) 215 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:50:10.146 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:09 vm05 bash[65415]: cephadm 2026-03-10T11:50:08.746217+0000 mgr.y (mgr.44107) 216 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:50:10.146 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:09 vm05 bash[65415]: audit 2026-03-10T11:50:09.111882+0000 mgr.y (mgr.44107) 217 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:10.146 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:09 vm05 bash[65415]: audit 2026-03-10T11:50:09.814902+0000 mon.c (mon.1) 251 : audit [DBG] from='client.? 192.168.123.105:0/3722095463' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:10.146 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:09 vm05 bash[68966]: cephadm 2026-03-10T11:50:08.668584+0000 mgr.y (mgr.44107) 215 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:50:10.146 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:09 vm05 bash[68966]: cephadm 2026-03-10T11:50:08.746217+0000 mgr.y (mgr.44107) 216 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:50:10.146 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:09 vm05 bash[68966]: audit 2026-03-10T11:50:09.111882+0000 mgr.y (mgr.44107) 217 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:10.146 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:09 vm05 bash[68966]: audit 2026-03-10T11:50:09.814902+0000 mon.c (mon.1) 251 : audit [DBG] from='client.? 192.168.123.105:0/3722095463' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:10.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:09 vm07 bash[46158]: cephadm 2026-03-10T11:50:08.668584+0000 mgr.y (mgr.44107) 215 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:50:10.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:09 vm07 bash[46158]: cephadm 2026-03-10T11:50:08.746217+0000 mgr.y (mgr.44107) 216 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:50:10.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:09 vm07 bash[46158]: audit 2026-03-10T11:50:09.111882+0000 mgr.y (mgr.44107) 217 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:10.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:09 vm07 bash[46158]: audit 2026-03-10T11:50:09.814902+0000 mon.c (mon.1) 251 : audit [DBG] from='client.? 192.168.123.105:0/3722095463' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:09.225669+0000 mgr.y (mgr.44107) 218 : audit [DBG] from='client.44335 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:09.409031+0000 mgr.y (mgr.44107) 219 : audit [DBG] from='client.54381 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: cluster 2026-03-10T11:50:09.453027+0000 mgr.y (mgr.44107) 220 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:09.591049+0000 mgr.y (mgr.44107) 221 : audit [DBG] from='client.44341 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.011544+0000 mgr.y (mgr.44107) 222 : audit [DBG] from='client.44353 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.194540+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.196745+0000 mon.c (mon.1) 252 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.197708+0000 mon.c (mon.1) 253 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.200906+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.203326+0000 mon.c (mon.1) 254 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.206305+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.208512+0000 mon.c (mon.1) 255 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.211306+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.213642+0000 mon.c (mon.1) 256 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.602883+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.605398+0000 mon.c (mon.1) 257 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:10 vm05 bash[65415]: audit 2026-03-10T11:50:10.605897+0000 mon.c (mon.1) 258 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:09.225669+0000 mgr.y (mgr.44107) 218 : audit [DBG] from='client.44335 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:09.409031+0000 mgr.y (mgr.44107) 219 : audit [DBG] from='client.54381 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: cluster 2026-03-10T11:50:09.453027+0000 mgr.y (mgr.44107) 220 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
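The "osd ok-to-stop" dispatches above are cephadm's safety gate: before restarting an OSD on the new image it asks the mons whether stopping it would leave any PG below its minimum replica count (the "max": 16 argument apparently lets the mons extend the answer to a batch of additional OSDs that could stop together). The same check can be run by hand, using osd.1 from the log as the example:

    # Non-zero exit, with an explanation, if stopping osd.1 would make PGs unavailable.
    ceph osd ok-to-stop 1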
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:09.591049+0000 mgr.y (mgr.44107) 221 : audit [DBG] from='client.44341 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.011544+0000 mgr.y (mgr.44107) 222 : audit [DBG] from='client.44353 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.194540+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.196745+0000 mon.c (mon.1) 252 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.197708+0000 mon.c (mon.1) 253 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.091 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.200906+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.203326+0000 mon.c (mon.1) 254 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.206305+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.208512+0000 mon.c (mon.1) 255 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.211306+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.213642+0000 mon.c (mon.1) 256 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T11:50:11.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.602883+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.605398+0000 mon.c (mon.1) 257 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T11:50:11.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:10 vm05 bash[68966]: audit 2026-03-10T11:50:10.605897+0000 mon.c (mon.1) 258 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:09.225669+0000 mgr.y (mgr.44107) 218 : audit [DBG] from='client.44335 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:09.409031+0000 mgr.y (mgr.44107) 219 : audit [DBG] from='client.54381 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: cluster 2026-03-10T11:50:09.453027+0000 mgr.y (mgr.44107) 220 : cluster [DBG] pgmap v98: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:50:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:09.591049+0000 mgr.y (mgr.44107) 221 : audit [DBG] from='client.44341 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.011544+0000 mgr.y (mgr.44107) 222 : audit [DBG] from='client.44353 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:50:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.194540+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.196745+0000 mon.c (mon.1) 252 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
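mon.b relays the same audit and cluster payloads already reported via mon.a and mon.c; every mon forwards the shared log channels to its local journal. The channels can be read once, without tailing three journals:

    ceph log last 50 info audit      # recent audit-channel entries
    ceph log last 50 info cluster    # recent cluster-channel entries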
2026-03-10T11:50:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.197708+0000 mon.c (mon.1) 253 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.200906+0000 mon.a (mon.0) 343 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.203326+0000 mon.c (mon.1) 254 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.206305+0000 mon.a (mon.0) 344 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.208512+0000 mon.c (mon.1) 255 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.211306+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.213642+0000 mon.c (mon.1) 256 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T11:50:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.602883+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.605398+0000 mon.c (mon.1) 257 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
2026-03-10T11:50:11.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:10 vm07 bash[46158]: audit 2026-03-10T11:50:10.605897+0000 mon.c (mon.1) 258 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:11.357 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:11.357 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:11.357 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:11.357 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:11.357 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:11.357 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:11.358 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:11.358 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:11.358 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:11.840 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:11 vm05 systemd[1]: Stopping Ceph osd.1 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
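The repeated systemd complaint points at line 23 of the cephadm-generated unit template for this fsid. KillMode=none appears to be deliberate in this cephadm version, so that systemd does not signal the podman/docker-managed container processes directly; for this test it is expected noise rather than a failure. To inspect the unit that triggers it (fsid taken from this run):

    systemctl cat 'ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.1.service'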
2026-03-10T11:50:11.840 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:11 vm05 bash[28302]: debug 2026-03-10T11:50:11.389+0000 7fa28149b700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:50:11.840 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:11 vm05 bash[28302]: debug 2026-03-10T11:50:11.389+0000 7fa28149b700 -1 osd.1 117 *** Got signal Terminated ***
2026-03-10T11:50:11.840 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:11 vm05 bash[28302]: debug 2026-03-10T11:50:11.389+0000 7fa28149b700 -1 osd.1 117 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: cephadm 2026-03-10T11:50:10.195854+0000 mgr.y (mgr.44107) 223 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: cephadm 2026-03-10T11:50:10.195880+0000 mgr.y (mgr.44107) 224 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: cephadm 2026-03-10T11:50:10.198151+0000 mgr.y (mgr.44107) 225 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: cephadm 2026-03-10T11:50:10.203823+0000 mgr.y (mgr.44107) 226 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: cephadm 2026-03-10T11:50:10.208953+0000 mgr.y (mgr.44107) 227 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: audit 2026-03-10T11:50:10.213748+0000 mgr.y (mgr.44107) 228 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: cephadm 2026-03-10T11:50:10.214294+0000 mgr.y (mgr.44107) 229 : cephadm [INF] Upgrade: osd.1 is safe to restart
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: cephadm 2026-03-10T11:50:10.598738+0000 mgr.y (mgr.44107) 230 : cephadm [INF] Upgrade: Updating osd.1
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: cephadm 2026-03-10T11:50:10.607148+0000 mgr.y (mgr.44107) 231 : cephadm [INF] Deploying daemon osd.1 on vm05
2026-03-10T11:50:12.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:11 vm07 bash[46158]: cluster 2026-03-10T11:50:11.396104+0000 mon.a (mon.0) 347 : cluster [INF] osd.1 marked itself down and dead
2026-03-10T11:50:12.225 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: cephadm 2026-03-10T11:50:10.195854+0000 mgr.y (mgr.44107) 223 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:50:12.225 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: cephadm 2026-03-10T11:50:10.195880+0000 mgr.y (mgr.44107) 224 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:50:12.225 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: cephadm 2026-03-10T11:50:10.198151+0000 mgr.y (mgr.44107) 225 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:50:12.225 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: cephadm 2026-03-10T11:50:10.203823+0000 mgr.y (mgr.44107) 226 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: cephadm 2026-03-10T11:50:10.208953+0000 mgr.y (mgr.44107) 227 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: audit 2026-03-10T11:50:10.213748+0000 mgr.y (mgr.44107) 228 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: cephadm 2026-03-10T11:50:10.214294+0000 mgr.y (mgr.44107) 229 : cephadm [INF] Upgrade: osd.1 is safe to restart
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: cephadm 2026-03-10T11:50:10.598738+0000 mgr.y (mgr.44107) 230 : cephadm [INF] Upgrade: Updating osd.1
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: cephadm 2026-03-10T11:50:10.607148+0000 mgr.y (mgr.44107) 231 : cephadm [INF] Deploying daemon osd.1 on vm05
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:11 vm05 bash[68966]: cluster 2026-03-10T11:50:11.396104+0000 mon.a (mon.0) 347 : cluster [INF] osd.1 marked itself down and dead
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: cephadm 2026-03-10T11:50:10.195854+0000 mgr.y (mgr.44107) 223 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: cephadm 2026-03-10T11:50:10.195880+0000 mgr.y (mgr.44107) 224 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: cephadm 2026-03-10T11:50:10.198151+0000 mgr.y (mgr.44107) 225 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: cephadm 2026-03-10T11:50:10.203823+0000 mgr.y (mgr.44107) 226 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: cephadm 2026-03-10T11:50:10.208953+0000 mgr.y (mgr.44107) 227 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: audit 2026-03-10T11:50:10.213748+0000 mgr.y (mgr.44107) 228 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["1"], "max": 16}]: dispatch
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: cephadm 2026-03-10T11:50:10.214294+0000 mgr.y (mgr.44107) 229 : cephadm [INF] Upgrade: osd.1 is safe to restart
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: cephadm 2026-03-10T11:50:10.598738+0000 mgr.y (mgr.44107) 230 : cephadm [INF] Upgrade: Updating osd.1
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: cephadm 2026-03-10T11:50:10.607148+0000 mgr.y (mgr.44107) 231 : cephadm [INF] Deploying daemon osd.1 on vm05
2026-03-10T11:50:12.226 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:11 vm05 bash[65415]: cluster 2026-03-10T11:50:11.396104+0000 mon.a (mon.0) 347 : cluster [INF] osd.1 marked itself down and dead
2026-03-10T11:50:12.226 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:11 vm05 bash[92621]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-1
2026-03-10T11:50:12.507 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:12.507 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:12.507 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.1.service: Deactivated successfully.
2026-03-10T11:50:12.507 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: Stopped Ceph osd.1 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:50:12.507 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:12.507 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: Started Ceph osd.1 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:50:12.507 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:12.508 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:12.508 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:12.508 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:12.508 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:12.508 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:50:12 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
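This is one iteration of the staggered upgrade under test: daemon types already handled (mgr, mon, crash) get container_image pinned to the target digest, then each remaining OSD is gated by ok-to-stop, marked down, and redeployed from the target image before the loop moves on. The same staging can be driven by hand with the orchestrator's staggered-upgrade flags (image digest taken from the log above):

    # Narrow first pass: managers only, one daemon at a time.
    ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc \
        --daemon-types mgr --limit 1
    # Later passes widen the scope to the remaining daemon types.
    ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc \
        --daemon-types mon,crash,osd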
2026-03-10T11:50:12.840 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:12 vm05 bash[92833]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:12 vm07 bash[46158]: cluster 2026-03-10T11:50:11.453414+0000 mgr.y (mgr.44107) 232 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:12 vm07 bash[46158]: cluster 2026-03-10T11:50:11.897545+0000 mon.a (mon.0) 348 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:50:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:12 vm07 bash[46158]: cluster 2026-03-10T11:50:11.908370+0000 mon.a (mon.0) 349 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in
2026-03-10T11:50:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:12 vm07 bash[46158]: audit 2026-03-10T11:50:12.485438+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:12 vm07 bash[46158]: audit 2026-03-10T11:50:12.490612+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:12 vm07 bash[46158]: audit 2026-03-10T11:50:12.492312+0000 mon.c (mon.1) 259 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:13.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:12 vm05 bash[65415]: cluster 2026-03-10T11:50:11.453414+0000 mgr.y (mgr.44107) 232 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:13.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:12 vm05 bash[65415]: cluster 2026-03-10T11:50:11.897545+0000 mon.a (mon.0) 348 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:50:13.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:12 vm05 bash[65415]: cluster 2026-03-10T11:50:11.908370+0000 mon.a (mon.0) 349 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in
2026-03-10T11:50:13.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:12 vm05 bash[65415]: audit 2026-03-10T11:50:12.485438+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:13.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:12 vm05 bash[65415]: audit 2026-03-10T11:50:12.490612+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:13.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:12 vm05 bash[65415]: audit 2026-03-10T11:50:12.492312+0000 mon.c (mon.1) 259 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:13.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:12 vm05 bash[68966]: cluster 2026-03-10T11:50:11.453414+0000 mgr.y (mgr.44107) 232 : cluster [DBG] pgmap v99: 161 pgs: 161 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:13.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:12 vm05 bash[68966]: cluster 2026-03-10T11:50:11.897545+0000 mon.a (mon.0) 348 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:50:13.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:12 vm05 bash[68966]: cluster 2026-03-10T11:50:11.908370+0000 mon.a (mon.0) 349 : cluster [DBG] osdmap e118: 8 total, 7 up, 8 in
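A transient OSD_DOWN health warning is the expected signature of a rolling OSD restart and clears once the daemon boots from the new image. To watch it during a window like this:

    ceph health detail    # names the active check, here OSD_DOWN
    ceph osd tree down    # restrict the CRUSH tree to down OSDs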
2026-03-10T11:50:13.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:12 vm05 bash[68966]: audit 2026-03-10T11:50:12.485438+0000 mon.a (mon.0) 350 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:13.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:12 vm05 bash[68966]: audit 2026-03-10T11:50:12.490612+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:13.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:12 vm05 bash[68966]: audit 2026-03-10T11:50:12.492312+0000 mon.c (mon.1) 259 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:13.840 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:13 vm05 bash[92833]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T11:50:13.840 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:13 vm05 bash[92833]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:13.840 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:13 vm05 bash[92833]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-10T11:50:13.840 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:13 vm05 bash[92833]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-311c59b2-5967-4b3d-8950-9c3c9b304be2/osd-block-9cbc5424-3289-45dc-8763-da809c9c9e84 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
2026-03-10T11:50:14.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:13 vm07 bash[46158]: cluster 2026-03-10T11:50:12.914199+0000 mon.a (mon.0) 352 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in
2026-03-10T11:50:14.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:13 vm05 bash[65415]: cluster 2026-03-10T11:50:12.914199+0000 mon.a (mon.0) 352 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in
2026-03-10T11:50:14.340 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:13 vm05 bash[92833]: Running command: /usr/bin/ln -snf /dev/ceph-311c59b2-5967-4b3d-8950-9c3c9b304be2/osd-block-9cbc5424-3289-45dc-8763-da809c9c9e84 /var/lib/ceph/osd/ceph-1/block
2026-03-10T11:50:14.340 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:13 vm05 bash[92833]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
2026-03-10T11:50:14.340 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:13 vm05 bash[92833]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
2026-03-10T11:50:14.340 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:13 vm05 bash[92833]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-10T11:50:14.340 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:13 vm05 bash[92833]: --> ceph-volume lvm activate successful for osd ID: 1
2026-03-10T11:50:14.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:13 vm05 bash[68966]: cluster 2026-03-10T11:50:12.914199+0000 mon.a (mon.0) 352 : cluster [DBG] osdmap e119: 8 total, 7 up, 8 in
2026-03-10T11:50:14.918 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:14 vm05 bash[93177]: debug 2026-03-10T11:50:14.669+0000 7f2202688740 -1 Falling back to public interface
2026-03-10T11:50:14.918 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:14 vm05 bash[68966]: cluster 2026-03-10T11:50:13.453701+0000 mgr.y (mgr.44107) 233 : cluster [DBG] pgmap v102: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:50:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:14 vm07 bash[46158]: cluster 2026-03-10T11:50:13.453701+0000 mgr.y (mgr.44107) 233 : cluster [DBG] pgmap v102: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:50:15.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:14 vm05 bash[65415]: cluster 2026-03-10T11:50:13.453701+0000 mgr.y (mgr.44107) 233 : cluster [DBG] pgmap v102: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:50:16.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:15 vm07 bash[46158]: audit 2026-03-10T11:50:15.903230+0000 mon.a (mon.0) 353 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T11:50:16.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:15 vm05 bash[68966]: audit 2026-03-10T11:50:15.903230+0000 mon.a (mon.0) 353 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T11:50:16.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:15 vm05 bash[65415]: audit 2026-03-10T11:50:15.903230+0000 mon.a (mon.0) 353 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
2026-03-10T11:50:16.340 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:15 vm05 bash[93177]: debug 2026-03-10T11:50:15.881+0000 7f2202688740 -1 osd.1 0 read_superblock omap replica is missing.
2026-03-10T11:50:16.340 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:15 vm05 bash[93177]: debug 2026-03-10T11:50:15.897+0000 7f2202688740 -1 osd.1 117 log_to_monitors true
2026-03-10T11:50:16.340 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:50:15 vm05 bash[93177]: debug 2026-03-10T11:50:15.945+0000 7f21fa433640 -1 osd.1 117 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T11:50:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:16 vm07 bash[46158]: cluster 2026-03-10T11:50:15.454081+0000 mgr.y (mgr.44107) 234 : cluster [DBG] pgmap v103: 161 pgs: 6 active+undersized, 19 stale+active+clean, 2 active+undersized+degraded, 134 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 7/627 objects degraded (1.116%)
2026-03-10T11:50:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:16 vm07 bash[46158]: cluster 2026-03-10T11:50:15.922462+0000 mon.a (mon.0) 354 : cluster [WRN] Health check failed: Degraded data redundancy: 7/627 objects degraded (1.116%), 2 pgs degraded (PG_DEGRADED)
2026-03-10T11:50:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:16 vm07 bash[46158]: audit 2026-03-10T11:50:15.930275+0000 mon.a (mon.0) 355 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T11:50:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:16 vm07 bash[46158]: cluster 2026-03-10T11:50:15.935499+0000 mon.a (mon.0) 356 : cluster [DBG] osdmap e120: 8 total, 7 up, 8 in
2026-03-10T11:50:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:16 vm07 bash[46158]: audit 2026-03-10T11:50:15.935843+0000 mon.a (mon.0) 357 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:16 vm05 bash[65415]: cluster 2026-03-10T11:50:15.454081+0000 mgr.y (mgr.44107) 234 : cluster [DBG] pgmap v103: 161 pgs: 6 active+undersized, 19 stale+active+clean, 2 active+undersized+degraded, 134 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 7/627 objects degraded (1.116%)
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:16 vm05 bash[65415]: cluster 2026-03-10T11:50:15.922462+0000 mon.a (mon.0) 354 : cluster [WRN] Health check failed: Degraded data redundancy: 7/627 objects degraded (1.116%), 2 pgs degraded (PG_DEGRADED)
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:16 vm05 bash[65415]: audit 2026-03-10T11:50:15.930275+0000 mon.a (mon.0) 355 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:16 vm05 bash[65415]: cluster 2026-03-10T11:50:15.935499+0000 mon.a (mon.0) 356 : cluster [DBG] osdmap e120: 8 total, 7 up, 8 in
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:16 vm05 bash[65415]: audit 2026-03-10T11:50:15.935843+0000 mon.a (mon.0) 357 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:16 vm05 bash[68966]: cluster 2026-03-10T11:50:15.454081+0000 mgr.y (mgr.44107) 234 : cluster [DBG] pgmap v103: 161 pgs: 6 active+undersized, 19 stale+active+clean, 2 active+undersized+degraded, 134 active+clean; 457 KiB data, 166 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s; 7/627 objects degraded (1.116%)
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:16 vm05 bash[68966]: cluster 2026-03-10T11:50:15.922462+0000 mon.a (mon.0) 354 : cluster [WRN] Health check failed: Degraded data redundancy: 7/627 objects degraded (1.116%), 2 pgs degraded (PG_DEGRADED)
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:16 vm05 bash[68966]: audit 2026-03-10T11:50:15.930275+0000 mon.a (mon.0) 355 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:16 vm05 bash[68966]: cluster 2026-03-10T11:50:15.935499+0000 mon.a (mon.0) 356 : cluster [DBG] osdmap e120: 8 total, 7 up, 8 in
2026-03-10T11:50:17.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:16 vm05 bash[68966]: audit 2026-03-10T11:50:15.935843+0000 mon.a (mon.0) 357 : audit [INF] from='osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm05", "root=default"]}]: dispatch
2026-03-10T11:50:18.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:17 vm07 bash[46158]: cluster 2026-03-10T11:50:16.932028+0000 mon.a (mon.0) 358 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
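The audit entries 355 and 357 above are the restarted OSD registering itself in the CRUSH map at boot. A minimal sketch of the equivalent manual commands, reusing the device class and weight values from the log (normally the OSD issues these itself on startup, so this is only illustrative):

    # set the device class for osd.1, then (re)place it under host/root buckets
    ceph osd crush set-device-class hdd osd.1
    ceph osd crush create-or-move osd.1 0.0195 host=vm05 root=default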
2026-03-10T11:50:18.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:17 vm07 bash[46158]: cluster 2026-03-10T11:50:16.946564+0000 mon.a (mon.0) 359 : cluster [INF] osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213] boot
2026-03-10T11:50:18.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:17 vm07 bash[46158]: cluster 2026-03-10T11:50:16.946600+0000 mon.a (mon.0) 360 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-10T11:50:18.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:17 vm07 bash[46158]: audit 2026-03-10T11:50:16.953167+0000 mon.c (mon.1) 260 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:50:18.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:17 vm05 bash[65415]: cluster 2026-03-10T11:50:16.932028+0000 mon.a (mon.0) 358 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:50:18.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:17 vm05 bash[65415]: cluster 2026-03-10T11:50:16.946564+0000 mon.a (mon.0) 359 : cluster [INF] osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213] boot
2026-03-10T11:50:18.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:17 vm05 bash[65415]: cluster 2026-03-10T11:50:16.946600+0000 mon.a (mon.0) 360 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-10T11:50:18.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:17 vm05 bash[65415]: audit 2026-03-10T11:50:16.953167+0000 mon.c (mon.1) 260 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:50:18.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:17 vm05 bash[68966]: cluster 2026-03-10T11:50:16.932028+0000 mon.a (mon.0) 358 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:50:18.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:17 vm05 bash[68966]: cluster 2026-03-10T11:50:16.946564+0000 mon.a (mon.0) 359 : cluster [INF] osd.1 [v2:192.168.123.105:6810/4016513213,v1:192.168.123.105:6811/4016513213] boot
2026-03-10T11:50:18.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:17 vm05 bash[68966]: cluster 2026-03-10T11:50:16.946600+0000 mon.a (mon.0) 360 : cluster [DBG] osdmap e121: 8 total, 8 up, 8 in
2026-03-10T11:50:18.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:17 vm05 bash[68966]: audit 2026-03-10T11:50:16.953167+0000 mon.c (mon.1) 260 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-10T11:50:19.116 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:18 vm05 bash[68966]: cluster 2026-03-10T11:50:17.454424+0000 mgr.y (mgr.44107) 235 : cluster [DBG] pgmap v106: 161 pgs: 44 active+undersized, 20 active+undersized+degraded, 97 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%)
2026-03-10T11:50:19.116 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:18 vm05 bash[68966]: cluster 2026-03-10T11:50:17.948414+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in
2026-03-10T11:50:19.116 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:18 vm05 bash[68966]: audit 2026-03-10T11:50:18.866731+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:19.116 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:18 vm05 bash[68966]: audit 2026-03-10T11:50:18.871528+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44107 ' entity='mgr.y'
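Each of the state changes above bumps the osdmap epoch (e120 -> e121 -> e122). A quick interactive check of the up/in counts and current epoch, assuming an admin keyring is available (exact output shape varies by release):

    ceph osd stat               # e.g. "8 osds: 8 up, 8 in; epoch: e122"
    ceph osd dump | head -n 1   # prints "epoch 122"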
2026-03-10T11:50:19.116 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:50:18 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:50:18] "GET /metrics HTTP/1.1" 200 37754 "" "Prometheus/2.51.0"
2026-03-10T11:50:19.116 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:18 vm05 bash[65415]: cluster 2026-03-10T11:50:17.454424+0000 mgr.y (mgr.44107) 235 : cluster [DBG] pgmap v106: 161 pgs: 44 active+undersized, 20 active+undersized+degraded, 97 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%)
2026-03-10T11:50:19.116 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:18 vm05 bash[65415]: cluster 2026-03-10T11:50:17.948414+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in
2026-03-10T11:50:19.116 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:18 vm05 bash[65415]: audit 2026-03-10T11:50:18.866731+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:19.116 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:18 vm05 bash[65415]: audit 2026-03-10T11:50:18.871528+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:19.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:18 vm07 bash[46158]: cluster 2026-03-10T11:50:17.454424+0000 mgr.y (mgr.44107) 235 : cluster [DBG] pgmap v106: 161 pgs: 44 active+undersized, 20 active+undersized+degraded, 97 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%)
2026-03-10T11:50:19.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:18 vm07 bash[46158]: cluster 2026-03-10T11:50:17.948414+0000 mon.a (mon.0) 361 : cluster [DBG] osdmap e122: 8 total, 8 up, 8 in
2026-03-10T11:50:19.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:18 vm07 bash[46158]: audit 2026-03-10T11:50:18.866731+0000 mon.a (mon.0) 362 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:19.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:18 vm07 bash[46158]: audit 2026-03-10T11:50:18.871528+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:20.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:19 vm07 bash[46158]: audit 2026-03-10T11:50:19.120024+0000 mgr.y (mgr.44107) 236 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:20.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:19 vm07 bash[46158]: audit 2026-03-10T11:50:19.438518+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:20.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:19 vm07 bash[46158]: audit 2026-03-10T11:50:19.447720+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:20.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:19 vm05 bash[65415]: audit 2026-03-10T11:50:19.120024+0000 mgr.y (mgr.44107) 236 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:20.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:19 vm05 bash[65415]: audit 2026-03-10T11:50:19.438518+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:20.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:19 vm05 bash[65415]: audit 2026-03-10T11:50:19.447720+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.44107 ' entity='mgr.y'
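The client.iscsi.foo.vm05.txapnk audit entries are the iscsi gateway daemon polling the mgr for service status. To see the deployed services and the daemons generating this traffic, the standard orchestrator listings would be used (a sketch, not part of the recorded run):

    ceph orch ls    # services, placement spec, running/expected counts
    ceph orch ps    # one row per daemon: host, status, version, container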
2026-03-10T11:50:20.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:19 vm05 bash[68966]: audit 2026-03-10T11:50:19.120024+0000 mgr.y (mgr.44107) 236 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:20.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:19 vm05 bash[68966]: audit 2026-03-10T11:50:19.438518+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:20.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:19 vm05 bash[68966]: audit 2026-03-10T11:50:19.447720+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:21.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:20 vm07 bash[46158]: cluster 2026-03-10T11:50:19.454776+0000 mgr.y (mgr.44107) 237 : cluster [DBG] pgmap v108: 161 pgs: 44 active+undersized, 20 active+undersized+degraded, 97 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%)
2026-03-10T11:50:21.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:20 vm07 bash[46158]: audit 2026-03-10T11:50:20.487360+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:21.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:20 vm07 bash[46158]: audit 2026-03-10T11:50:20.488571+0000 mon.c (mon.1) 261 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:21.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:20 vm05 bash[68966]: cluster 2026-03-10T11:50:19.454776+0000 mgr.y (mgr.44107) 237 : cluster [DBG] pgmap v108: 161 pgs: 44 active+undersized, 20 active+undersized+degraded, 97 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%)
2026-03-10T11:50:21.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:20 vm05 bash[68966]: audit 2026-03-10T11:50:20.487360+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:21.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:20 vm05 bash[68966]: audit 2026-03-10T11:50:20.488571+0000 mon.c (mon.1) 261 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:21.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:20 vm05 bash[65415]: cluster 2026-03-10T11:50:19.454776+0000 mgr.y (mgr.44107) 237 : cluster [DBG] pgmap v108: 161 pgs: 44 active+undersized, 20 active+undersized+degraded, 97 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 83/627 objects degraded (13.238%)
2026-03-10T11:50:21.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:20 vm05 bash[65415]: audit 2026-03-10T11:50:20.487360+0000 mon.a (mon.0) 366 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:21.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:20 vm05 bash[65415]: audit 2026-03-10T11:50:20.488571+0000 mon.c (mon.1) 261 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:22.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:21 vm05 bash[68966]: cluster 2026-03-10T11:50:21.946201+0000 mon.a (mon.0) 367 : cluster [WRN] Health check update: Degraded data redundancy: 35/627 objects degraded (5.582%), 10 pgs degraded (PG_DEGRADED)
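The PG_DEGRADED warning is expected while each OSD restarts during the staggered upgrade: objects whose replicas live on the restarting OSD are briefly under-replicated. A sketch of the interactive checks that correspond to these pgmap entries:

    ceph health detail   # expands PG_DEGRADED into per-check detail
    ceph pg stat         # one-line summary matching the pgmap lines above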
2026-03-10T11:50:22.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:22 vm05 bash[65415]: cluster 2026-03-10T11:50:21.946201+0000 mon.a (mon.0) 367 : cluster [WRN] Health check update: Degraded data redundancy: 35/627 objects degraded (5.582%), 10 pgs degraded (PG_DEGRADED)
2026-03-10T11:50:22.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:21 vm07 bash[46158]: cluster 2026-03-10T11:50:21.946201+0000 mon.a (mon.0) 367 : cluster [WRN] Health check update: Degraded data redundancy: 35/627 objects degraded (5.582%), 10 pgs degraded (PG_DEGRADED)
2026-03-10T11:50:23.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:23 vm05 bash[68966]: cluster 2026-03-10T11:50:21.455227+0000 mgr.y (mgr.44107) 238 : cluster [DBG] pgmap v109: 161 pgs: 18 active+undersized, 10 active+undersized+degraded, 133 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 35/627 objects degraded (5.582%)
2026-03-10T11:50:23.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:23 vm05 bash[65415]: cluster 2026-03-10T11:50:21.455227+0000 mgr.y (mgr.44107) 238 : cluster [DBG] pgmap v109: 161 pgs: 18 active+undersized, 10 active+undersized+degraded, 133 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 35/627 objects degraded (5.582%)
2026-03-10T11:50:23.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:22 vm07 bash[46158]: cluster 2026-03-10T11:50:21.455227+0000 mgr.y (mgr.44107) 238 : cluster [DBG] pgmap v109: 161 pgs: 18 active+undersized, 10 active+undersized+degraded, 133 active+clean; 457 KiB data, 184 MiB used, 160 GiB / 160 GiB avail; 35/627 objects degraded (5.582%)
2026-03-10T11:50:24.310 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:24 vm05 bash[68966]: cluster 2026-03-10T11:50:24.001224+0000 mon.a (mon.0) 368 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 35/627 objects degraded (5.582%), 10 pgs degraded)
2026-03-10T11:50:24.310 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:24 vm05 bash[68966]: cluster 2026-03-10T11:50:24.001239+0000 mon.a (mon.0) 369 : cluster [INF] Cluster is now healthy
2026-03-10T11:50:24.310 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:24 vm05 bash[65415]: cluster 2026-03-10T11:50:24.001224+0000 mon.a (mon.0) 368 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 35/627 objects degraded (5.582%), 10 pgs degraded)
2026-03-10T11:50:24.310 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:24 vm05 bash[65415]: cluster 2026-03-10T11:50:24.001239+0000 mon.a (mon.0) 369 : cluster [INF] Cluster is now healthy
2026-03-10T11:50:24.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:24 vm07 bash[46158]: cluster 2026-03-10T11:50:24.001224+0000 mon.a (mon.0) 368 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 35/627 objects degraded (5.582%), 10 pgs degraded)
2026-03-10T11:50:24.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:24 vm07 bash[46158]: cluster 2026-03-10T11:50:24.001239+0000 mon.a (mon.0) 369 : cluster [INF] Cluster is now healthy
2026-03-10T11:50:25.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:25 vm05 bash[68966]: cluster 2026-03-10T11:50:23.455617+0000 mgr.y (mgr.44107) 239 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:50:25.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:25 vm05 bash[68966]: audit 2026-03-10T11:50:25.034024+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:25.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:25 vm05 bash[68966]: audit 2026-03-10T11:50:25.041099+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44107 ' entity='mgr.y'
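Recovery of the 35 degraded objects completed within a couple of seconds of the osdmap settling, at which point the monitor logs "Cluster is now healthy". A simple wait loop of the kind used interactively between upgrade steps (teuthology has its own health waiters; this is only a sketch):

    until ceph health | grep -q HEALTH_OK; do sleep 5; done
    ceph -s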
2026-03-10T11:50:25.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:25 vm05 bash[68966]: audit 2026-03-10T11:50:25.044074+0000 mon.c (mon.1) 262 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:25.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:25 vm05 bash[68966]: audit 2026-03-10T11:50:25.044963+0000 mon.c (mon.1) 263 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:25.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:25 vm05 bash[68966]: audit 2026-03-10T11:50:25.048725+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:25.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:25 vm05 bash[65415]: cluster 2026-03-10T11:50:23.455617+0000 mgr.y (mgr.44107) 239 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:50:25.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:25 vm05 bash[65415]: audit 2026-03-10T11:50:25.034024+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:25.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:25 vm05 bash[65415]: audit 2026-03-10T11:50:25.041099+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:25.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:25 vm05 bash[65415]: audit 2026-03-10T11:50:25.044074+0000 mon.c (mon.1) 262 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:25.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:25 vm05 bash[65415]: audit 2026-03-10T11:50:25.044963+0000 mon.c (mon.1) 263 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:25.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:25 vm05 bash[65415]: audit 2026-03-10T11:50:25.048725+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:25.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:25 vm07 bash[46158]: cluster 2026-03-10T11:50:23.455617+0000 mgr.y (mgr.44107) 239 : cluster [DBG] pgmap v110: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail
2026-03-10T11:50:25.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:25 vm07 bash[46158]: audit 2026-03-10T11:50:25.034024+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:25.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:25 vm07 bash[46158]: audit 2026-03-10T11:50:25.041099+0000 mon.a (mon.0) 371 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:25.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:25 vm07 bash[46158]: audit 2026-03-10T11:50:25.044074+0000 mon.c (mon.1) 262 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:25.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:25 vm07 bash[46158]: audit 2026-03-10T11:50:25.044963+0000 mon.c (mon.1) 263 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
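The config generate-minimal-conf and auth get audits are cephadm gathering the material it injects into a daemon's container before a (re)deploy. The same data can be pulled by hand with the commands audited above:

    ceph config generate-minimal-conf   # minimal ceph.conf with fsid and mon addresses
    ceph auth get client.admin          # keyring for the named entity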
2026-03-10T11:50:25.351 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:25 vm07 bash[46158]: audit 2026-03-10T11:50:25.048725+0000 mon.a (mon.0) 372 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: audit 2026-03-10T11:50:25.095569+0000 mon.c (mon.1) 264 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: audit 2026-03-10T11:50:25.097605+0000 mon.c (mon.1) 265 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: audit 2026-03-10T11:50:25.099002+0000 mon.c (mon.1) 266 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: audit 2026-03-10T11:50:25.100322+0000 mon.c (mon.1) 267 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: audit 2026-03-10T11:50:25.101746+0000 mon.c (mon.1) 268 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: audit 2026-03-10T11:50:25.101931+0000 mgr.y (mgr.44107) 240 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: cephadm 2026-03-10T11:50:25.102999+0000 mgr.y (mgr.44107) 241 : cephadm [INF] Upgrade: osd.4 is safe to restart
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: audit 2026-03-10T11:50:25.506123+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: audit 2026-03-10T11:50:25.510870+0000 mon.c (mon.1) 269 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 bash[46158]: audit 2026-03-10T11:50:25.511603+0000 mon.c (mon.1) 270 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:26.304 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:26.304 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:26.304 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:26.304 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:26.304 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:26.304 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:26.304 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:26.304 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
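Two things happen above: the upgrade gate asks whether osd.4 can be restarted without losing PG availability, and systemd (on daemon-reload) warns about KillMode=none in the cephadm-generated unit, which cephadm sets so the container runtime handles shutdown itself. The gate can be run by hand; it returns nonzero (EBUSY) when stopping the listed OSDs would leave PGs unavailable. The --max batching flag matches the audit entry above; older releases accept only the id list:

    ceph osd ok-to-stop 4 --max 16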
2026-03-10T11:50:26.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: audit 2026-03-10T11:50:25.095569+0000 mon.c (mon.1) 264 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:26.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: audit 2026-03-10T11:50:25.097605+0000 mon.c (mon.1) 265 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:26.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: audit 2026-03-10T11:50:25.099002+0000 mon.c (mon.1) 266 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:26.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: audit 2026-03-10T11:50:25.100322+0000 mon.c (mon.1) 267 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:26.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: audit 2026-03-10T11:50:25.101746+0000 mon.c (mon.1) 268 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch
2026-03-10T11:50:26.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: audit 2026-03-10T11:50:25.101931+0000 mgr.y (mgr.44107) 240 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: cephadm 2026-03-10T11:50:25.102999+0000 mgr.y (mgr.44107) 241 : cephadm [INF] Upgrade: osd.4 is safe to restart
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: audit 2026-03-10T11:50:25.506123+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: audit 2026-03-10T11:50:25.510870+0000 mon.c (mon.1) 269 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:26 vm05 bash[68966]: audit 2026-03-10T11:50:25.511603+0000 mon.c (mon.1) 270 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: audit 2026-03-10T11:50:25.095569+0000 mon.c (mon.1) 264 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: audit 2026-03-10T11:50:25.097605+0000 mon.c (mon.1) 265 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: audit 2026-03-10T11:50:25.099002+0000 mon.c (mon.1) 266 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: audit 2026-03-10T11:50:25.100322+0000 mon.c (mon.1) 267 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: audit 2026-03-10T11:50:25.101746+0000 mon.c (mon.1) 268 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: audit 2026-03-10T11:50:25.101931+0000 mgr.y (mgr.44107) 240 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["4"], "max": 16}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: cephadm 2026-03-10T11:50:25.102999+0000 mgr.y (mgr.44107) 241 : cephadm [INF] Upgrade: osd.4 is safe to restart
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: audit 2026-03-10T11:50:25.506123+0000 mon.a (mon.0) 373 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: audit 2026-03-10T11:50:25.510870+0000 mon.c (mon.1) 269 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-10T11:50:26.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:26 vm05 bash[65415]: audit 2026-03-10T11:50:25.511603+0000 mon.c (mon.1) 270 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
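The same audit stream replays through the mon.a and mon.c journals. While this per-OSD loop runs, overall progress is visible from the orchestrator (standard cephadm command; JSON output with the target image, an in-progress flag, and a progress message):

    ceph orch upgrade status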
audit 2026-03-10T11:50:25.511603+0000 mon.c (mon.1) 270 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:50:26.696 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:26 vm07 systemd[1]: Stopping Ceph osd.4 for 72041074-1c73-11f1-8607-4fca9a5e0a4d... 2026-03-10T11:50:26.696 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:26 vm07 bash[20845]: debug 2026-03-10T11:50:26.342+0000 7fcb69fb5700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T11:50:26.696 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:26 vm07 bash[20845]: debug 2026-03-10T11:50:26.342+0000 7fcb69fb5700 -1 osd.4 122 *** Got signal Terminated *** 2026-03-10T11:50:26.696 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:26 vm07 bash[20845]: debug 2026-03-10T11:50:26.342+0000 7fcb69fb5700 -1 osd.4 122 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T11:50:27.423 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:27 vm07 bash[46158]: cluster 2026-03-10T11:50:25.456010+0000 mgr.y (mgr.44107) 242 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:50:27.423 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:27 vm07 bash[46158]: cluster 2026-03-10T11:50:25.456010+0000 mgr.y (mgr.44107) 242 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:50:27.423 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:27 vm07 bash[46158]: cephadm 2026-03-10T11:50:25.501591+0000 mgr.y (mgr.44107) 243 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T11:50:27.423 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:27 vm07 bash[46158]: cephadm 2026-03-10T11:50:25.501591+0000 mgr.y (mgr.44107) 243 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T11:50:27.423 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:27 vm07 bash[46158]: cephadm 2026-03-10T11:50:25.513147+0000 mgr.y (mgr.44107) 244 : cephadm [INF] Deploying daemon osd.4 on vm07 2026-03-10T11:50:27.423 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:27 vm07 bash[46158]: cephadm 2026-03-10T11:50:25.513147+0000 mgr.y (mgr.44107) 244 : cephadm [INF] Deploying daemon osd.4 on vm07 2026-03-10T11:50:27.423 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:27 vm07 bash[46158]: cluster 2026-03-10T11:50:26.350142+0000 mon.a (mon.0) 374 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T11:50:27.423 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:27 vm07 bash[46158]: cluster 2026-03-10T11:50:26.350142+0000 mon.a (mon.0) 374 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T11:50:27.424 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:27 vm07 bash[54167]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-4 2026-03-10T11:50:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:27 vm05 bash[68966]: cluster 2026-03-10T11:50:25.456010+0000 mgr.y (mgr.44107) 242 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:50:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:27 vm05 bash[68966]: cluster 2026-03-10T11:50:25.456010+0000 mgr.y (mgr.44107) 242 : cluster [DBG] pgmap v111: 161 pgs: 161 
active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:50:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:27 vm05 bash[68966]: cephadm 2026-03-10T11:50:25.501591+0000 mgr.y (mgr.44107) 243 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T11:50:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:27 vm05 bash[68966]: cephadm 2026-03-10T11:50:25.501591+0000 mgr.y (mgr.44107) 243 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T11:50:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:27 vm05 bash[68966]: cephadm 2026-03-10T11:50:25.513147+0000 mgr.y (mgr.44107) 244 : cephadm [INF] Deploying daemon osd.4 on vm07 2026-03-10T11:50:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:27 vm05 bash[68966]: cephadm 2026-03-10T11:50:25.513147+0000 mgr.y (mgr.44107) 244 : cephadm [INF] Deploying daemon osd.4 on vm07 2026-03-10T11:50:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:27 vm05 bash[68966]: cluster 2026-03-10T11:50:26.350142+0000 mon.a (mon.0) 374 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T11:50:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:27 vm05 bash[68966]: cluster 2026-03-10T11:50:26.350142+0000 mon.a (mon.0) 374 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T11:50:27.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:27 vm05 bash[65415]: cluster 2026-03-10T11:50:25.456010+0000 mgr.y (mgr.44107) 242 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:50:27.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:27 vm05 bash[65415]: cluster 2026-03-10T11:50:25.456010+0000 mgr.y (mgr.44107) 242 : cluster [DBG] pgmap v111: 161 pgs: 161 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:50:27.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:27 vm05 bash[65415]: cephadm 2026-03-10T11:50:25.501591+0000 mgr.y (mgr.44107) 243 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T11:50:27.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:27 vm05 bash[65415]: cephadm 2026-03-10T11:50:25.501591+0000 mgr.y (mgr.44107) 243 : cephadm [INF] Upgrade: Updating osd.4 2026-03-10T11:50:27.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:27 vm05 bash[65415]: cephadm 2026-03-10T11:50:25.513147+0000 mgr.y (mgr.44107) 244 : cephadm [INF] Deploying daemon osd.4 on vm07 2026-03-10T11:50:27.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:27 vm05 bash[65415]: cephadm 2026-03-10T11:50:25.513147+0000 mgr.y (mgr.44107) 244 : cephadm [INF] Deploying daemon osd.4 on vm07 2026-03-10T11:50:27.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:27 vm05 bash[65415]: cluster 2026-03-10T11:50:26.350142+0000 mon.a (mon.0) 374 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T11:50:27.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:27 vm05 bash[65415]: cluster 2026-03-10T11:50:26.350142+0000 mon.a (mon.0) 374 : cluster [INF] osd.4 marked itself down and dead 2026-03-10T11:50:27.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:27.696 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:27.696 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.4.service: Deactivated successfully. 2026-03-10T11:50:27.696 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: Stopped Ceph osd.4 for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:50:27.696 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:27.697 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: Started Ceph osd.4 for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:50:27.697 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:27.697 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:27.697 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:27.697 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:27.697 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:27.697 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:50:27 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:28.107 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:27 vm07 bash[54380]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T11:50:28.107 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:27 vm07 bash[54380]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: cluster 2026-03-10T11:50:27.067928+0000 mon.a (mon.0) 375 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: cluster 2026-03-10T11:50:27.067928+0000 mon.a (mon.0) 375 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: cluster 2026-03-10T11:50:27.085364+0000 mon.a (mon.0) 376 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: cluster 2026-03-10T11:50:27.085364+0000 mon.a (mon.0) 376 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: audit 2026-03-10T11:50:27.706346+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: audit 2026-03-10T11:50:27.706346+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: audit 2026-03-10T11:50:27.712795+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: audit 2026-03-10T11:50:27.712795+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: audit 2026-03-10T11:50:27.715992+0000 mon.c (mon.1) 271 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: audit 2026-03-10T11:50:27.715992+0000 mon.c (mon.1) 271 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: cluster 2026-03-10T11:50:28.077211+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-10T11:50:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:28 vm07 bash[46158]: cluster 
2026-03-10T11:50:28.077211+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: cluster 2026-03-10T11:50:27.067928+0000 mon.a (mon.0) 375 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: cluster 2026-03-10T11:50:27.067928+0000 mon.a (mon.0) 375 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: cluster 2026-03-10T11:50:27.085364+0000 mon.a (mon.0) 376 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: cluster 2026-03-10T11:50:27.085364+0000 mon.a (mon.0) 376 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: audit 2026-03-10T11:50:27.706346+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: audit 2026-03-10T11:50:27.706346+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: audit 2026-03-10T11:50:27.712795+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: audit 2026-03-10T11:50:27.712795+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: audit 2026-03-10T11:50:27.715992+0000 mon.c (mon.1) 271 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: audit 2026-03-10T11:50:27.715992+0000 mon.c (mon.1) 271 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: cluster 2026-03-10T11:50:28.077211+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-10T11:50:28.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:28 vm05 bash[68966]: cluster 2026-03-10T11:50:28.077211+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: cluster 2026-03-10T11:50:27.067928+0000 mon.a (mon.0) 375 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: cluster 2026-03-10T11:50:27.067928+0000 mon.a (mon.0) 375 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN) 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: cluster 2026-03-10T11:50:27.085364+0000 mon.a (mon.0) 376 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 in 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: cluster 2026-03-10T11:50:27.085364+0000 mon.a (mon.0) 376 : cluster [DBG] osdmap e123: 8 total, 7 up, 8 
in 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: audit 2026-03-10T11:50:27.706346+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: audit 2026-03-10T11:50:27.706346+0000 mon.a (mon.0) 377 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: audit 2026-03-10T11:50:27.712795+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: audit 2026-03-10T11:50:27.712795+0000 mon.a (mon.0) 378 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: audit 2026-03-10T11:50:27.715992+0000 mon.c (mon.1) 271 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: audit 2026-03-10T11:50:27.715992+0000 mon.c (mon.1) 271 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: cluster 2026-03-10T11:50:28.077211+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-10T11:50:28.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:28 vm05 bash[65415]: cluster 2026-03-10T11:50:28.077211+0000 mon.a (mon.0) 379 : cluster [DBG] osdmap e124: 8 total, 7 up, 8 in 2026-03-10T11:50:29.108 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:50:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:50:28] "GET /metrics HTTP/1.1" 200 37621 "" "Prometheus/2.51.0" 2026-03-10T11:50:29.109 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:28 vm07 bash[54380]: --> Failed to activate via raw: did not find any matching OSD to activate 2026-03-10T11:50:29.109 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:28 vm07 bash[54380]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T11:50:29.109 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:28 vm07 bash[54380]: Running command: /usr/bin/ceph-authtool --gen-print-key 2026-03-10T11:50:29.109 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:28 vm07 bash[54380]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-10T11:50:29.109 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:28 vm07 bash[54380]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-ca46f2aa-d053-46b2-bde0-551edc991dc8/osd-block-5d2d7aab-4d36-465e-b574-aaa4de107693 --path /var/lib/ceph/osd/ceph-4 --no-mon-config 2026-03-10T11:50:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:29 vm07 bash[46158]: cluster 2026-03-10T11:50:27.456371+0000 mgr.y (mgr.44107) 245 : cluster [DBG] pgmap v113: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T11:50:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:29 vm07 bash[46158]: cluster 2026-03-10T11:50:27.456371+0000 mgr.y (mgr.44107) 245 : cluster [DBG] pgmap v113: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 
1.0 KiB/s rd, 1 op/s 2026-03-10T11:50:29.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:29 vm07 bash[54380]: Running command: /usr/bin/ln -snf /dev/ceph-ca46f2aa-d053-46b2-bde0-551edc991dc8/osd-block-5d2d7aab-4d36-465e-b574-aaa4de107693 /var/lib/ceph/osd/ceph-4/block 2026-03-10T11:50:29.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:29 vm07 bash[54380]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block 2026-03-10T11:50:29.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:29 vm07 bash[54380]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 2026-03-10T11:50:29.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:29 vm07 bash[54380]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 2026-03-10T11:50:29.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:29 vm07 bash[54380]: --> ceph-volume lvm activate successful for osd ID: 4 2026-03-10T11:50:29.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:29 vm05 bash[68966]: cluster 2026-03-10T11:50:27.456371+0000 mgr.y (mgr.44107) 245 : cluster [DBG] pgmap v113: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T11:50:29.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:29 vm05 bash[68966]: cluster 2026-03-10T11:50:27.456371+0000 mgr.y (mgr.44107) 245 : cluster [DBG] pgmap v113: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T11:50:29.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:29 vm05 bash[65415]: cluster 2026-03-10T11:50:27.456371+0000 mgr.y (mgr.44107) 245 : cluster [DBG] pgmap v113: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T11:50:29.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:29 vm05 bash[65415]: cluster 2026-03-10T11:50:27.456371+0000 mgr.y (mgr.44107) 245 : cluster [DBG] pgmap v113: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-10T11:50:30.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:30 vm07 bash[46158]: audit 2026-03-10T11:50:29.128943+0000 mgr.y (mgr.44107) 246 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:30.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:30 vm07 bash[46158]: audit 2026-03-10T11:50:29.128943+0000 mgr.y (mgr.44107) 246 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:30.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:29 vm07 bash[54734]: debug 2026-03-10T11:50:29.990+0000 7f1946806740 -1 Falling back to public interface 2026-03-10T11:50:30.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:30 vm05 bash[68966]: audit 2026-03-10T11:50:29.128943+0000 mgr.y (mgr.44107) 246 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:30.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:30 vm05 bash[68966]: audit 2026-03-10T11:50:29.128943+0000 mgr.y (mgr.44107) 246 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:30.590 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:30 vm05 bash[65415]: audit 2026-03-10T11:50:29.128943+0000 mgr.y (mgr.44107) 246 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:30.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:30 vm05 bash[65415]: audit 2026-03-10T11:50:29.128943+0000 mgr.y (mgr.44107) 246 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:31 vm07 bash[46158]: cluster 2026-03-10T11:50:29.456745+0000 mgr.y (mgr.44107) 247 : cluster [DBG] pgmap v115: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:50:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:31 vm07 bash[46158]: cluster 2026-03-10T11:50:29.456745+0000 mgr.y (mgr.44107) 247 : cluster [DBG] pgmap v115: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:50:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:31 vm07 bash[46158]: audit 2026-03-10T11:50:30.865329+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:31 vm07 bash[46158]: audit 2026-03-10T11:50:30.865329+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:31.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:31 vm07 bash[54734]: debug 2026-03-10T11:50:31.206+0000 7f1946806740 -1 osd.4 0 read_superblock omap replica is missing. 
2026-03-10T11:50:31.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:31 vm07 bash[54734]: debug 2026-03-10T11:50:31.226+0000 7f1946806740 -1 osd.4 122 log_to_monitors true 2026-03-10T11:50:31.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:31 vm05 bash[68966]: cluster 2026-03-10T11:50:29.456745+0000 mgr.y (mgr.44107) 247 : cluster [DBG] pgmap v115: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:50:31.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:31 vm05 bash[68966]: cluster 2026-03-10T11:50:29.456745+0000 mgr.y (mgr.44107) 247 : cluster [DBG] pgmap v115: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:50:31.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:31 vm05 bash[68966]: audit 2026-03-10T11:50:30.865329+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:31.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:31 vm05 bash[68966]: audit 2026-03-10T11:50:30.865329+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:31.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:31 vm05 bash[65415]: cluster 2026-03-10T11:50:29.456745+0000 mgr.y (mgr.44107) 247 : cluster [DBG] pgmap v115: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:50:31.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:31 vm05 bash[65415]: cluster 2026-03-10T11:50:29.456745+0000 mgr.y (mgr.44107) 247 : cluster [DBG] pgmap v115: 161 pgs: 24 stale+active+clean, 137 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-10T11:50:31.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:31 vm05 bash[65415]: audit 2026-03-10T11:50:30.865329+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:31.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:31 vm05 bash[65415]: audit 2026-03-10T11:50:30.865329+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:32.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:32 vm07 bash[46158]: audit 2026-03-10T11:50:31.233547+0000 mon.b (mon.2) 29 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:32 vm07 bash[46158]: audit 2026-03-10T11:50:31.233547+0000 mon.b (mon.2) 29 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:32 vm07 bash[46158]: audit 2026-03-10T11:50:31.237591+0000 mon.a (mon.0) 381 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:32 vm07 bash[46158]: audit 2026-03-10T11:50:31.237591+0000 mon.a (mon.0) 381 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 
11:50:32 vm07 bash[54734]: debug 2026-03-10T11:50:32.154+0000 7f193e5b1640 -1 osd.4 122 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T11:50:32.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:32 vm05 bash[68966]: audit 2026-03-10T11:50:31.233547+0000 mon.b (mon.2) 29 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:32 vm05 bash[68966]: audit 2026-03-10T11:50:31.233547+0000 mon.b (mon.2) 29 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:32 vm05 bash[68966]: audit 2026-03-10T11:50:31.237591+0000 mon.a (mon.0) 381 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:32 vm05 bash[68966]: audit 2026-03-10T11:50:31.237591+0000 mon.a (mon.0) 381 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:32 vm05 bash[65415]: audit 2026-03-10T11:50:31.233547+0000 mon.b (mon.2) 29 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:32 vm05 bash[65415]: audit 2026-03-10T11:50:31.233547+0000 mon.b (mon.2) 29 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:32 vm05 bash[65415]: audit 2026-03-10T11:50:31.237591+0000 mon.a (mon.0) 381 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:32.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:32 vm05 bash[65415]: audit 2026-03-10T11:50:31.237591+0000 mon.a (mon.0) 381 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: cluster 2026-03-10T11:50:31.457213+0000 mgr.y (mgr.44107) 248 : cluster [DBG] pgmap v116: 161 pgs: 19 active+undersized, 13 stale+active+clean, 12 active+undersized+degraded, 117 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: cluster 2026-03-10T11:50:31.457213+0000 mgr.y (mgr.44107) 248 : cluster [DBG] pgmap v116: 161 pgs: 19 active+undersized, 13 stale+active+clean, 12 active+undersized+degraded, 117 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 
vm07 bash[46158]: cluster 2026-03-10T11:50:32.124521+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 44/627 objects degraded (7.018%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: cluster 2026-03-10T11:50:32.124521+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 44/627 objects degraded (7.018%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: audit 2026-03-10T11:50:32.132461+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: audit 2026-03-10T11:50:32.132461+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: cluster 2026-03-10T11:50:32.137396+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: cluster 2026-03-10T11:50:32.137396+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: audit 2026-03-10T11:50:32.138375+0000 mon.b (mon.2) 30 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: audit 2026-03-10T11:50:32.138375+0000 mon.b (mon.2) 30 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: audit 2026-03-10T11:50:32.142390+0000 mon.a (mon.0) 385 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:33 vm07 bash[46158]: audit 2026-03-10T11:50:32.142390+0000 mon.a (mon.0) 385 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: cluster 2026-03-10T11:50:31.457213+0000 mgr.y (mgr.44107) 248 : cluster [DBG] pgmap v116: 161 pgs: 19 active+undersized, 13 stale+active+clean, 12 active+undersized+degraded, 117 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: cluster 2026-03-10T11:50:31.457213+0000 mgr.y (mgr.44107) 248 : cluster [DBG] pgmap v116: 161 pgs: 19 active+undersized, 13 stale+active+clean, 12 active+undersized+degraded, 117 active+clean; 457 KiB data, 185 MiB 
used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: cluster 2026-03-10T11:50:32.124521+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 44/627 objects degraded (7.018%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: cluster 2026-03-10T11:50:32.124521+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 44/627 objects degraded (7.018%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: audit 2026-03-10T11:50:32.132461+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: audit 2026-03-10T11:50:32.132461+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: cluster 2026-03-10T11:50:32.137396+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: cluster 2026-03-10T11:50:32.137396+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: audit 2026-03-10T11:50:32.138375+0000 mon.b (mon.2) 30 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: audit 2026-03-10T11:50:32.138375+0000 mon.b (mon.2) 30 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: audit 2026-03-10T11:50:32.142390+0000 mon.a (mon.0) 385 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:33 vm05 bash[68966]: audit 2026-03-10T11:50:32.142390+0000 mon.a (mon.0) 385 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: cluster 2026-03-10T11:50:31.457213+0000 mgr.y (mgr.44107) 248 : cluster [DBG] pgmap v116: 161 pgs: 19 active+undersized, 13 stale+active+clean, 12 active+undersized+degraded, 117 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: cluster 2026-03-10T11:50:31.457213+0000 mgr.y (mgr.44107) 248 : cluster 
[DBG] pgmap v116: 161 pgs: 19 active+undersized, 13 stale+active+clean, 12 active+undersized+degraded, 117 active+clean; 457 KiB data, 185 MiB used, 160 GiB / 160 GiB avail; 44/627 objects degraded (7.018%) 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: cluster 2026-03-10T11:50:32.124521+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 44/627 objects degraded (7.018%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: cluster 2026-03-10T11:50:32.124521+0000 mon.a (mon.0) 382 : cluster [WRN] Health check failed: Degraded data redundancy: 44/627 objects degraded (7.018%), 12 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: audit 2026-03-10T11:50:32.132461+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: audit 2026-03-10T11:50:32.132461+0000 mon.a (mon.0) 383 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: cluster 2026-03-10T11:50:32.137396+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: cluster 2026-03-10T11:50:32.137396+0000 mon.a (mon.0) 384 : cluster [DBG] osdmap e125: 8 total, 7 up, 8 in 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: audit 2026-03-10T11:50:32.138375+0000 mon.b (mon.2) 30 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: audit 2026-03-10T11:50:32.138375+0000 mon.b (mon.2) 30 : audit [INF] from='osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944]' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: audit 2026-03-10T11:50:32.142390+0000 mon.a (mon.0) 385 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:33.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:33 vm05 bash[65415]: audit 2026-03-10T11:50:32.142390+0000 mon.a (mon.0) 385 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:34.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:34 vm07 bash[46158]: cluster 2026-03-10T11:50:33.143171+0000 mon.a (mon.0) 386 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:34.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:34 vm07 bash[46158]: cluster 2026-03-10T11:50:33.143171+0000 mon.a (mon.0) 386 : cluster [INF] Health check 
cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:34.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:34 vm07 bash[46158]: cluster 2026-03-10T11:50:33.170184+0000 mon.a (mon.0) 387 : cluster [INF] osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944] boot 2026-03-10T11:50:34.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:34 vm07 bash[46158]: cluster 2026-03-10T11:50:33.170184+0000 mon.a (mon.0) 387 : cluster [INF] osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944] boot 2026-03-10T11:50:34.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:34 vm07 bash[46158]: cluster 2026-03-10T11:50:33.170274+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T11:50:34.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:34 vm07 bash[46158]: cluster 2026-03-10T11:50:33.170274+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T11:50:34.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:34 vm07 bash[46158]: audit 2026-03-10T11:50:33.170300+0000 mon.c (mon.1) 272 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:50:34.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:34 vm07 bash[46158]: audit 2026-03-10T11:50:33.170300+0000 mon.c (mon.1) 272 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:50:34.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:34 vm05 bash[68966]: cluster 2026-03-10T11:50:33.143171+0000 mon.a (mon.0) 386 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:34.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:34 vm05 bash[68966]: cluster 2026-03-10T11:50:33.143171+0000 mon.a (mon.0) 386 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:34.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:34 vm05 bash[68966]: cluster 2026-03-10T11:50:33.170184+0000 mon.a (mon.0) 387 : cluster [INF] osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944] boot 2026-03-10T11:50:34.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:34 vm05 bash[68966]: cluster 2026-03-10T11:50:33.170184+0000 mon.a (mon.0) 387 : cluster [INF] osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944] boot 2026-03-10T11:50:34.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:34 vm05 bash[68966]: cluster 2026-03-10T11:50:33.170274+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T11:50:34.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:34 vm05 bash[68966]: cluster 2026-03-10T11:50:33.170274+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:34 vm05 bash[68966]: audit 2026-03-10T11:50:33.170300+0000 mon.c (mon.1) 272 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:34 vm05 bash[68966]: audit 2026-03-10T11:50:33.170300+0000 mon.c (mon.1) 272 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:34 vm05 bash[65415]: cluster 
2026-03-10T11:50:33.143171+0000 mon.a (mon.0) 386 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:34 vm05 bash[65415]: cluster 2026-03-10T11:50:33.143171+0000 mon.a (mon.0) 386 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:34 vm05 bash[65415]: cluster 2026-03-10T11:50:33.170184+0000 mon.a (mon.0) 387 : cluster [INF] osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944] boot 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:34 vm05 bash[65415]: cluster 2026-03-10T11:50:33.170184+0000 mon.a (mon.0) 387 : cluster [INF] osd.4 [v2:192.168.123.107:6800/482910944,v1:192.168.123.107:6801/482910944] boot 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:34 vm05 bash[65415]: cluster 2026-03-10T11:50:33.170274+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:34 vm05 bash[65415]: cluster 2026-03-10T11:50:33.170274+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e126: 8 total, 8 up, 8 in 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:34 vm05 bash[65415]: audit 2026-03-10T11:50:33.170300+0000 mon.c (mon.1) 272 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:50:34.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:34 vm05 bash[65415]: audit 2026-03-10T11:50:33.170300+0000 mon.c (mon.1) 272 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: cluster 2026-03-10T11:50:33.457575+0000 mgr.y (mgr.44107) 249 : cluster [DBG] pgmap v119: 161 pgs: 43 active+undersized, 26 active+undersized+degraded, 92 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 106/627 objects degraded (16.906%) 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: cluster 2026-03-10T11:50:33.457575+0000 mgr.y (mgr.44107) 249 : cluster [DBG] pgmap v119: 161 pgs: 43 active+undersized, 26 active+undersized+degraded, 92 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 106/627 objects degraded (16.906%) 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: cluster 2026-03-10T11:50:34.165177+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: cluster 2026-03-10T11:50:34.165177+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: audit 2026-03-10T11:50:34.480044+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: audit 2026-03-10T11:50:34.480044+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: audit 2026-03-10T11:50:34.486002+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.446 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: audit 2026-03-10T11:50:34.486002+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: audit 2026-03-10T11:50:35.035446+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: audit 2026-03-10T11:50:35.035446+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: audit 2026-03-10T11:50:35.040472+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:35 vm07 bash[46158]: audit 2026-03-10T11:50:35.040472+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: cluster 2026-03-10T11:50:33.457575+0000 mgr.y (mgr.44107) 249 : cluster [DBG] pgmap v119: 161 pgs: 43 active+undersized, 26 active+undersized+degraded, 92 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 106/627 objects degraded (16.906%) 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: cluster 2026-03-10T11:50:33.457575+0000 mgr.y (mgr.44107) 249 : cluster [DBG] pgmap v119: 161 pgs: 43 active+undersized, 26 active+undersized+degraded, 92 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 106/627 objects degraded (16.906%) 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: cluster 2026-03-10T11:50:34.165177+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: cluster 2026-03-10T11:50:34.165177+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: audit 2026-03-10T11:50:34.480044+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: audit 2026-03-10T11:50:34.480044+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: audit 2026-03-10T11:50:34.486002+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: audit 2026-03-10T11:50:34.486002+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: audit 2026-03-10T11:50:35.035446+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: audit 2026-03-10T11:50:35.035446+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: audit 2026-03-10T11:50:35.040472+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:35.590 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:35 vm05 bash[68966]: audit 2026-03-10T11:50:35.040472+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:35 vm05 bash[65415]: cluster 2026-03-10T11:50:33.457575+0000 mgr.y (mgr.44107) 249 : cluster [DBG] pgmap v119: 161 pgs: 43 active+undersized, 26 active+undersized+degraded, 92 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 106/627 objects degraded (16.906%)
2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:35 vm05 bash[65415]: cluster 2026-03-10T11:50:34.165177+0000 mon.a (mon.0) 389 : cluster [DBG] osdmap e127: 8 total, 8 up, 8 in
2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:35 vm05 bash[65415]: audit 2026-03-10T11:50:34.480044+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:35 vm05 bash[65415]: audit 2026-03-10T11:50:34.486002+0000 mon.a (mon.0) 391 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:35.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:35 vm05 bash[65415]: audit 2026-03-10T11:50:35.035446+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:35.591 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:35 vm05 bash[65415]: audit 2026-03-10T11:50:35.040472+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:36 vm07 bash[46158]: cluster 2026-03-10T11:50:35.457981+0000 mgr.y (mgr.44107) 250 : cluster [DBG] pgmap v121: 161 pgs: 38 active+undersized, 22 active+undersized+degraded, 101 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 90/627 objects degraded (14.354%)
2026-03-10T11:50:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:36 vm07 bash[46158]: audit 2026-03-10T11:50:35.486526+0000 mon.c (mon.1) 273 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:36.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:36 vm05 bash[68966]: cluster 2026-03-10T11:50:35.457981+0000 mgr.y (mgr.44107) 250 : cluster [DBG] pgmap v121: 161 pgs: 38 active+undersized, 22 active+undersized+degraded, 101 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 90/627 objects degraded (14.354%)
2026-03-10T11:50:36.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:36 vm05 bash[68966]: audit 2026-03-10T11:50:35.486526+0000 mon.c (mon.1) 273 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:36.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:36 vm05 bash[65415]: cluster 2026-03-10T11:50:35.457981+0000 mgr.y (mgr.44107) 250 : cluster [DBG] pgmap v121: 161 pgs: 38 active+undersized, 22 active+undersized+degraded, 101 active+clean; 457 KiB data, 203 MiB used, 160 GiB / 160 GiB avail; 90/627 objects degraded (14.354%)
2026-03-10T11:50:36.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:36 vm05 bash[65415]: audit 2026-03-10T11:50:35.486526+0000 mon.c (mon.1) 273 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:37.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:37 vm05 bash[68966]: cluster 2026-03-10T11:50:37.509023+0000 mon.a (mon.0) 394 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 90/627 objects degraded (14.354%), 22 pgs degraded)
2026-03-10T11:50:37.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:37 vm05 bash[68966]: cluster 2026-03-10T11:50:37.509044+0000 mon.a (mon.0) 395 : cluster [INF] Cluster is now healthy
2026-03-10T11:50:37.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:37 vm05 bash[65415]: cluster 2026-03-10T11:50:37.509023+0000 mon.a (mon.0) 394 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 90/627 objects degraded (14.354%), 22 pgs degraded)
2026-03-10T11:50:37.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:37 vm05 bash[65415]: cluster 2026-03-10T11:50:37.509044+0000 mon.a (mon.0) 395 : cluster [INF] Cluster is now healthy
2026-03-10T11:50:37.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:37 vm07 bash[46158]: cluster 2026-03-10T11:50:37.509023+0000 mon.a (mon.0) 394 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 90/627 objects degraded (14.354%), 22 pgs degraded)
2026-03-10T11:50:37.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:37 vm07 bash[46158]: cluster 2026-03-10T11:50:37.509044+0000 mon.a (mon.0) 395 : cluster [INF] Cluster is now healthy
2026-03-10T11:50:38.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:38 vm05 bash[68966]: cluster 2026-03-10T11:50:37.458420+0000 mgr.y (mgr.44107) 251 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s
2026-03-10T11:50:38.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:38 vm05 bash[65415]: cluster 2026-03-10T11:50:37.458420+0000 mgr.y (mgr.44107) 251 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s
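The recovery arc in the entries above is the expected one for a rolling OSD restart: degraded objects fall from 106/627 (16.906%) to 90/627 (14.354%), PG_DEGRADED clears, and mon.a declares the cluster healthy. A minimal way to watch the same arc by hand from a cephadm shell (standard CLI, shown here as a sketch rather than anything this job's task list runs):

    ceph -s              # overall health plus the degraded-object ratio
    ceph health detail   # per-check detail while PG_DEGRADED is raised
    ceph pg stat         # one-line count of PG states (active+clean etc.)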
2026-03-10T11:50:38.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:38 vm07 bash[46158]: cluster 2026-03-10T11:50:37.458420+0000 mgr.y (mgr.44107) 251 : cluster [DBG] pgmap v122: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 0 op/s
2026-03-10T11:50:39.135 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:50:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:50:38] "GET /metrics HTTP/1.1" 200 37637 "" "Prometheus/2.51.0"
2026-03-10T11:50:39.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:39 vm05 bash[65415]: audit 2026-03-10T11:50:39.139580+0000 mgr.y (mgr.44107) 252 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:39.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:39 vm05 bash[68966]: audit 2026-03-10T11:50:39.139580+0000 mgr.y (mgr.44107) 252 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:39.849 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:39 vm07 bash[46158]: audit 2026-03-10T11:50:39.139580+0000 mgr.y (mgr.44107) 252 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:50:40.215 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (17m) 21s ago 24m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (4m) 6s ago 23m 66.8M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (5m) 21s ago 23m 44.2M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (4m) 6s ago 26m 467M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (14m) 21s ago 27m 532M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (3m) 21s ago 27m 49.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (3m) 6s ago 26m 47.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (3m) 21s ago 26m 45.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (17m) 21s ago 24m 8024k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (17m) 6s ago 24m 8019k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (63s) 21s ago 26m 46.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (26s) 21s ago 26m 22.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8c6d1f8db09
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (116s) 21s ago 26m 46.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (2m) 21s ago 25m 68.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (11s) 6s ago 25m 22.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f48f9737e97e
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (25m) 6s ago 25m 55.9M 4096M 17.2.0 e1d6a67b021e bf6c3e870ec6
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (24m) 6s ago 24m 55.1M 4096M 17.2.0 e1d6a67b021e cb67459019f8
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (24m) 6s ago 24m 58.6M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:50:40.603 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (5m) 6s ago 24m 45.1M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:50:40.604 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (23m) 21s ago 23m 89.4M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:50:40.604 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (23m) 6s ago 23m 90.0M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
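The orch ps snapshot catches the staggered upgrade mid-flight: the mons, mgrs, and osd.0-4 already run 19.2.3-678-ge911bdeb (image 654f31e6858e), while osd.5-7 and both rgw daemons still run 17.2.0 (image e1d6a67b021e). A sketch for listing only the stragglers, assuming jq is installed and that the JSON dump exposes version/daemon_type/daemon_id fields (the field names are an assumption, not something this log confirms):

    ceph orch ps --daemon-type osd   # narrow the table to one daemon type
    # daemons still reporting the old version; field names assumed:
    ceph orch ps --format json \
        | jq -r '.[] | select(.version=="17.2.0") | "\(.daemon_type).\(.daemon_id)"'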
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:    "mon": {
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:    "mgr": {
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:    "osd": {
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3,
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 5
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:    "rgw": {
2026-03-10T11:50:40.839 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:50:40.840 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:50:40.840 INFO:teuthology.orchestra.run.vm05.stdout:    "overall": {
2026-03-10T11:50:40.840 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 5,
2026-03-10T11:50:40.840 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 10
2026-03-10T11:50:40.840 INFO:teuthology.orchestra.run.vm05.stdout:    }
2026-03-10T11:50:40.840 INFO:teuthology.orchestra.run.vm05.stdout:}
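ceph versions shows the same split by release rather than by container: every mon and mgr reports squid while 3 of 8 osds and both rgws still report quincy, which is exactly the intermediate state the 3-upgrade/staggered fragment is designed to produce. The staggered-upgrade CLI that drives phases like this looks roughly as follows (a sketch using the documented cephadm flags; the target image is elided here):

    ceph orch upgrade start --image <target-image> --daemon-types mgr,mon
    ceph orch upgrade start --image <target-image> --daemon-types osd --limit 3
    ceph orch upgrade start --image <target-image> --services rgw.foo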
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:40 vm05 bash[65415]: cluster 2026-03-10T11:50:39.458707+0000 mgr.y (mgr.44107) 253 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 699 B/s rd, 0 op/s
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:40 vm05 bash[65415]: audit 2026-03-10T11:50:40.542380+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:40 vm05 bash[65415]: audit 2026-03-10T11:50:40.549134+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:40 vm05 bash[65415]: audit 2026-03-10T11:50:40.552102+0000 mon.c (mon.1) 274 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:40 vm05 bash[65415]: audit 2026-03-10T11:50:40.552689+0000 mon.c (mon.1) 275 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:40 vm05 bash[65415]: audit 2026-03-10T11:50:40.559530+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:40 vm05 bash[68966]: cluster 2026-03-10T11:50:39.458707+0000 mgr.y (mgr.44107) 253 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 699 B/s rd, 0 op/s
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:40 vm05 bash[68966]: audit 2026-03-10T11:50:40.542380+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:40 vm05 bash[68966]: audit 2026-03-10T11:50:40.549134+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:40 vm05 bash[68966]: audit 2026-03-10T11:50:40.552102+0000 mon.c (mon.1) 274 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:40 vm05 bash[68966]: audit 2026-03-10T11:50:40.552689+0000 mon.c (mon.1) 275 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:40.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:40 vm05 bash[68966]: audit 2026-03-10T11:50:40.559530+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:40.895 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:40 vm07 bash[46158]: cluster 2026-03-10T11:50:39.458707+0000 mgr.y (mgr.44107) 253 : cluster [DBG] pgmap v123: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 699 B/s rd, 0 op/s
2026-03-10T11:50:40.895 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:40 vm07 bash[46158]: audit 2026-03-10T11:50:40.542380+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:40.895 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:40 vm07 bash[46158]: audit 2026-03-10T11:50:40.549134+0000 mon.a (mon.0) 397 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:40.895 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:40 vm07 bash[46158]: audit 2026-03-10T11:50:40.552102+0000 mon.c (mon.1) 274 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:40.895 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:40 vm07 bash[46158]: audit 2026-03-10T11:50:40.552689+0000 mon.c (mon.1) 275 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:40.895 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:40 vm07 bash[46158]: audit 2026-03-10T11:50:40.559530+0000 mon.a (mon.0) 398 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:41.040 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:50:41.040 INFO:teuthology.orchestra.run.vm05.stdout:    "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-10T11:50:41.040 INFO:teuthology.orchestra.run.vm05.stdout:    "in_progress": true,
2026-03-10T11:50:41.040 INFO:teuthology.orchestra.run.vm05.stdout:    "which": "Upgrading daemons of type(s) crash,osd",
2026-03-10T11:50:41.041 INFO:teuthology.orchestra.run.vm05.stdout:    "services_complete": [],
2026-03-10T11:50:41.041 INFO:teuthology.orchestra.run.vm05.stdout:    "progress": "5/8 daemons upgraded",
2026-03-10T11:50:41.041 INFO:teuthology.orchestra.run.vm05.stdout:    "message": "Currently upgrading osd daemons",
2026-03-10T11:50:41.041 INFO:teuthology.orchestra.run.vm05.stdout:    "is_paused": false
2026-03-10T11:50:41.041 INFO:teuthology.orchestra.run.vm05.stdout:}
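The status JSON confirms the osd phase is still running: 5/8 daemons upgraded, daemon types crash,osd targeted, not paused. Since ceph orch upgrade status already emits JSON (as captured above), a minimal polling loop needs only jq (a sketch, not how teuthology itself waits):

    # block until the orchestrator reports the upgrade finished
    while ceph orch upgrade status | jq -e '.in_progress' >/dev/null; do
        sleep 30
    done
    ceph orch upgrade status | jq -r '.progress, .message'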
upgraded", 2026-03-10T11:50:41.041 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Currently upgrading osd daemons", 2026-03-10T11:50:41.041 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false 2026-03-10T11:50:41.041 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.209547+0000 mgr.y (mgr.44107) 254 : audit [DBG] from='client.54408 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.209547+0000 mgr.y (mgr.44107) 254 : audit [DBG] from='client.54408 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.398427+0000 mgr.y (mgr.44107) 255 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.398427+0000 mgr.y (mgr.44107) 255 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.603997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.34397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.603997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.34397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.614444+0000 mon.c (mon.1) 276 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.614444+0000 mon.c (mon.1) 276 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.615544+0000 mon.c (mon.1) 277 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.615544+0000 mon.c (mon.1) 277 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.616401+0000 mon.c (mon.1) 278 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.616401+0000 mon.c (mon.1) 278 : audit [DBG] 
from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.617092+0000 mon.c (mon.1) 279 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.617092+0000 mon.c (mon.1) 279 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.617951+0000 mon.c (mon.1) 280 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.617951+0000 mon.c (mon.1) 280 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.618099+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.618099+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: cephadm 2026-03-10T11:50:40.618798+0000 mgr.y (mgr.44107) 258 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: cephadm 2026-03-10T11:50:40.618798+0000 mgr.y (mgr.44107) 258 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.840390+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.105:0/179897106' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:40.840390+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 
192.168.123.105:0/179897106' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: cephadm 2026-03-10T11:50:41.026641+0000 mgr.y (mgr.44107) 259 : cephadm [INF] Upgrade: Updating osd.5 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: cephadm 2026-03-10T11:50:41.026641+0000 mgr.y (mgr.44107) 259 : cephadm [INF] Upgrade: Updating osd.5 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:41.030960+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:41.030960+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:41.035334+0000 mon.c (mon.1) 281 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:41.035334+0000 mon.c (mon.1) 281 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:41.035913+0000 mon.c (mon.1) 282 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:41.035913+0000 mon.c (mon.1) 282 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: cephadm 2026-03-10T11:50:41.037301+0000 mgr.y (mgr.44107) 260 : cephadm [INF] Deploying daemon osd.5 on vm07 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: cephadm 2026-03-10T11:50:41.037301+0000 mgr.y (mgr.44107) 260 : cephadm [INF] Deploying daemon osd.5 on vm07 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:41.045372+0000 mgr.y (mgr.44107) 261 : audit [DBG] from='client.44389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.586 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 bash[46158]: audit 2026-03-10T11:50:41.045372+0000 mgr.y (mgr.44107) 261 : audit [DBG] from='client.44389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.209547+0000 mgr.y (mgr.44107) 254 : audit [DBG] from='client.54408 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.209547+0000 mgr.y (mgr.44107) 254 : audit [DBG] from='client.54408 -' entity='client.admin' cmd=[{"prefix": "orch 
upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.398427+0000 mgr.y (mgr.44107) 255 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.398427+0000 mgr.y (mgr.44107) 255 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.603997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.34397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.603997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.34397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.614444+0000 mon.c (mon.1) 276 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.614444+0000 mon.c (mon.1) 276 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.615544+0000 mon.c (mon.1) 277 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.615544+0000 mon.c (mon.1) 277 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.616401+0000 mon.c (mon.1) 278 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.616401+0000 mon.c (mon.1) 278 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.617092+0000 mon.c (mon.1) 279 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.617092+0000 mon.c (mon.1) 279 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.617951+0000 mon.c (mon.1) 280 : audit [DBG] from='mgr.44107 
192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.617951+0000 mon.c (mon.1) 280 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.618099+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.618099+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: cephadm 2026-03-10T11:50:40.618798+0000 mgr.y (mgr.44107) 258 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: cephadm 2026-03-10T11:50:40.618798+0000 mgr.y (mgr.44107) 258 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.840390+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.105:0/179897106' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:40.840390+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 
192.168.123.105:0/179897106' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: cephadm 2026-03-10T11:50:41.026641+0000 mgr.y (mgr.44107) 259 : cephadm [INF] Upgrade: Updating osd.5 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: cephadm 2026-03-10T11:50:41.026641+0000 mgr.y (mgr.44107) 259 : cephadm [INF] Upgrade: Updating osd.5 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:41.030960+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:41.030960+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:41.035334+0000 mon.c (mon.1) 281 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:41.035334+0000 mon.c (mon.1) 281 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:41.035913+0000 mon.c (mon.1) 282 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:41.035913+0000 mon.c (mon.1) 282 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: cephadm 2026-03-10T11:50:41.037301+0000 mgr.y (mgr.44107) 260 : cephadm [INF] Deploying daemon osd.5 on vm07 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: cephadm 2026-03-10T11:50:41.037301+0000 mgr.y (mgr.44107) 260 : cephadm [INF] Deploying daemon osd.5 on vm07 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:41.045372+0000 mgr.y (mgr.44107) 261 : audit [DBG] from='client.44389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:41 vm05 bash[65415]: audit 2026-03-10T11:50:41.045372+0000 mgr.y (mgr.44107) 261 : audit [DBG] from='client.44389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.209547+0000 mgr.y (mgr.44107) 254 : audit [DBG] from='client.54408 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.209547+0000 mgr.y (mgr.44107) 254 : audit [DBG] from='client.54408 -' entity='client.admin' cmd=[{"prefix": "orch 
upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.398427+0000 mgr.y (mgr.44107) 255 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.398427+0000 mgr.y (mgr.44107) 255 : audit [DBG] from='client.54414 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.603997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.34397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.603997+0000 mgr.y (mgr.44107) 256 : audit [DBG] from='client.34397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.614444+0000 mon.c (mon.1) 276 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.614444+0000 mon.c (mon.1) 276 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.615544+0000 mon.c (mon.1) 277 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.615544+0000 mon.c (mon.1) 277 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.616401+0000 mon.c (mon.1) 278 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.616401+0000 mon.c (mon.1) 278 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.617092+0000 mon.c (mon.1) 279 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.617092+0000 mon.c (mon.1) 279 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.617951+0000 mon.c (mon.1) 280 : audit [DBG] from='mgr.44107 
192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.617951+0000 mon.c (mon.1) 280 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.618099+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.618099+0000 mgr.y (mgr.44107) 257 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["5"], "max": 16}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: cephadm 2026-03-10T11:50:40.618798+0000 mgr.y (mgr.44107) 258 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: cephadm 2026-03-10T11:50:40.618798+0000 mgr.y (mgr.44107) 258 : cephadm [INF] Upgrade: osd.5 is safe to restart 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.840390+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 192.168.123.105:0/179897106' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:40.840390+0000 mon.b (mon.2) 31 : audit [DBG] from='client.? 
192.168.123.105:0/179897106' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: cephadm 2026-03-10T11:50:41.026641+0000 mgr.y (mgr.44107) 259 : cephadm [INF] Upgrade: Updating osd.5 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: cephadm 2026-03-10T11:50:41.026641+0000 mgr.y (mgr.44107) 259 : cephadm [INF] Upgrade: Updating osd.5 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:41.030960+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:41.030960+0000 mon.a (mon.0) 399 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:41.035334+0000 mon.c (mon.1) 281 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:41.035334+0000 mon.c (mon.1) 281 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:41.035913+0000 mon.c (mon.1) 282 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:41.035913+0000 mon.c (mon.1) 282 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: cephadm 2026-03-10T11:50:41.037301+0000 mgr.y (mgr.44107) 260 : cephadm [INF] Deploying daemon osd.5 on vm07 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: cephadm 2026-03-10T11:50:41.037301+0000 mgr.y (mgr.44107) 260 : cephadm [INF] Deploying daemon osd.5 on vm07 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:41.045372+0000 mgr.y (mgr.44107) 261 : audit [DBG] from='client.44389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:41 vm05 bash[68966]: audit 2026-03-10T11:50:41.045372+0000 mgr.y (mgr.44107) 261 : audit [DBG] from='client.44389 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:50:41.850 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
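The audit trail above is cephadm's per-daemon safety gate in one pass: the mgr asks the mons osd ok-to-stop for osd.5 (max 16), logs "osd.5 is safe to restart" only once that check passes, and then redeploys the daemon on vm07. The same two steps can be issued manually (osd ok-to-stop is a standard mon command; orch daemon redeploy is the CLI counterpart of the "Deploying daemon" step):

    ceph osd ok-to-stop 5            # would stopping osd.5 make any PG unavailable?
    ceph orch daemon redeploy osd.5  # recreate the container under the configured image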
2026-03-10T11:50:41.850 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:41.850 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:41.850 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:41.850 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:41.850 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:41.851 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: Stopping Ceph osd.5 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:50:41.851 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:41.851 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:41.851 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:41.851 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:41 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:42.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:41 vm07 bash[24010]: debug 2026-03-10T11:50:41.846+0000 7efe0e09a700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:50:42.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:41 vm07 bash[24010]: debug 2026-03-10T11:50:41.846+0000 7efe0e09a700 -1 osd.5 127 *** Got signal Terminated ***
2026-03-10T11:50:42.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:41 vm07 bash[24010]: debug 2026-03-10T11:50:41.846+0000 7efe0e09a700 -1 osd.5 127 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:50:42.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:42 vm05 bash[65415]: cluster 2026-03-10T11:50:41.459129+0000 mgr.y (mgr.44107) 262 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:42.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:42 vm05 bash[65415]: cluster 2026-03-10T11:50:41.855727+0000 mon.a (mon.0) 400 : cluster [INF] osd.5 marked itself down and dead
2026-03-10T11:50:42.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:42 vm05 bash[68966]: cluster 2026-03-10T11:50:41.459129+0000 mgr.y (mgr.44107) 262 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:42.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:42 vm05 bash[68966]: cluster 2026-03-10T11:50:41.855727+0000 mon.a (mon.0) 400 : cluster [INF] osd.5 marked itself down and dead
2026-03-10T11:50:42.942 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:42 vm07 bash[46158]: cluster 2026-03-10T11:50:41.459129+0000 mgr.y (mgr.44107) 262 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:50:42.942 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:42 vm07 bash[46158]: cluster 2026-03-10T11:50:41.855727+0000 mon.a (mon.0) 400 : cluster [INF] osd.5 marked itself down and dead
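osd.5 exits the moment docker-init forwards SIGTERM because osd_fast_shutdown defaults to true: the OSD skips the orderly shutdown path and relies on the mon map update instead, which is why mon.a immediately logs "osd.5 marked itself down and dead". The behavior is an ordinary config option:

    ceph config get osd osd_fast_shutdown         # expected output here: true
    ceph config set osd osd_fast_shutdown false   # opt back into the slow shutdown path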
11:50:42 vm07 bash[46158]: cluster 2026-03-10T11:50:41.459129+0000 mgr.y (mgr.44107) 262 : cluster [DBG] pgmap v124: 161 pgs: 161 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:50:42.942 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:42 vm07 bash[46158]: cluster 2026-03-10T11:50:41.855727+0000 mon.a (mon.0) 400 : cluster [INF] osd.5 marked itself down and dead 2026-03-10T11:50:42.942 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:42 vm07 bash[46158]: cluster 2026-03-10T11:50:41.855727+0000 mon.a (mon.0) 400 : cluster [INF] osd.5 marked itself down and dead 2026-03-10T11:50:42.942 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:42 vm07 bash[59019]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-5 2026-03-10T11:50:43.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:43.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:43.196 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:43.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:42 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.5.service: Deactivated successfully. 2026-03-10T11:50:43.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:42 vm07 systemd[1]: Stopped Ceph osd.5 for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:50:43.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:50:43.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: Started Ceph osd.5 for 72041074-1c73-11f1-8607-4fca9a5e0a4d. 2026-03-10T11:50:43.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:50:43.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:43.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:43.196 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:43.197 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:50:43 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:43.587 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:43 vm07 bash[59232]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:43.587 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:43 vm07 bash[59232]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:43.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:43 vm05 bash[65415]: cluster 2026-03-10T11:50:42.587280+0000 mon.a (mon.0) 401 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:50:43.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:43 vm05 bash[65415]: cluster 2026-03-10T11:50:42.624621+0000 mon.a (mon.0) 402 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in
2026-03-10T11:50:43.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:43 vm05 bash[65415]: audit 2026-03-10T11:50:43.212983+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:43.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:43 vm05 bash[65415]: audit 2026-03-10T11:50:43.217498+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:43.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:43 vm05 bash[65415]: audit 2026-03-10T11:50:43.219424+0000 mon.c (mon.1) 283 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:43.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:43 vm05 bash[68966]: cluster 2026-03-10T11:50:42.587280+0000 mon.a (mon.0) 401 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:50:43.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:43 vm05 bash[68966]: cluster 2026-03-10T11:50:42.624621+0000 mon.a (mon.0) 402 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in
2026-03-10T11:50:43.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:43 vm05 bash[68966]: audit 2026-03-10T11:50:43.212983+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:43.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:43 vm05 bash[68966]: audit 2026-03-10T11:50:43.217498+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:43.841 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:43 vm05 bash[68966]: audit 2026-03-10T11:50:43.219424+0000 mon.c (mon.1) 283 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:43.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:43 vm07 bash[46158]: cluster 2026-03-10T11:50:42.587280+0000 mon.a (mon.0) 401 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:50:43.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:43 vm07 bash[46158]: cluster 2026-03-10T11:50:42.624621+0000 mon.a (mon.0) 402 : cluster [DBG] osdmap e128: 8 total, 7 up, 8 in
2026-03-10T11:50:43.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:43 vm07 bash[46158]: audit 2026-03-10T11:50:43.212983+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:43.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:43 vm07 bash[46158]: audit 2026-03-10T11:50:43.217498+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:43.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:43 vm07 bash[46158]: audit 2026-03-10T11:50:43.219424+0000 mon.c (mon.1) 283 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:44.446 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T11:50:44.446 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:44.446 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:44.446 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-10T11:50:44.446 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-e232ca28-9c1c-4b68-8ca5-b6373da37232/osd-block-dcefdca8-8af9-4aeb-9472-1fb1d076fa1e --path /var/lib/ceph/osd/ceph-5 --no-mon-config
2026-03-10T11:50:44.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:44 vm07 bash[46158]: cluster 2026-03-10T11:50:43.459415+0000 mgr.y (mgr.44107) 263 : cluster [DBG] pgmap v126: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:50:44.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:44 vm07 bash[46158]: cluster 2026-03-10T11:50:43.625476+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in
2026-03-10T11:50:44.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: Running command: /usr/bin/ln -snf /dev/ceph-e232ca28-9c1c-4b68-8ca5-b6373da37232/osd-block-dcefdca8-8af9-4aeb-9472-1fb1d076fa1e /var/lib/ceph/osd/ceph-5/block
2026-03-10T11:50:44.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block
2026-03-10T11:50:44.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
2026-03-10T11:50:44.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-10T11:50:44.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:44 vm07 bash[59232]: --> ceph-volume lvm activate successful for osd ID: 5
2026-03-10T11:50:45.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:44 vm05 bash[65415]: cluster 2026-03-10T11:50:43.459415+0000 mgr.y (mgr.44107) 263 : cluster [DBG] pgmap v126: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:50:45.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:44 vm05 bash[65415]: cluster 2026-03-10T11:50:43.625476+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in
2026-03-10T11:50:45.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:44 vm05 bash[68966]: cluster 2026-03-10T11:50:43.459415+0000 mgr.y (mgr.44107) 263 : cluster [DBG] pgmap v126: 161 pgs: 20 stale+active+clean, 141 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:50:45.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:44 vm05 bash[68966]: cluster 2026-03-10T11:50:43.625476+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e129: 8 total, 7 up, 8 in
2026-03-10T11:50:45.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:45 vm07 bash[46158]: cluster 2026-03-10T11:50:45.604184+0000 mon.a (mon.0) 406 : cluster [WRN] Health check failed: Degraded data redundancy: 10/627 objects degraded (1.595%), 4 pgs degraded (PG_DEGRADED)
2026-03-10T11:50:45.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:45 vm07 bash[59578]: debug 2026-03-10T11:50:45.642+0000 7f0e3b427740 -1 Falling back to public interface
2026-03-10T11:50:46.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:45 vm05 bash[65415]: cluster 2026-03-10T11:50:45.604184+0000 mon.a (mon.0) 406 : cluster [WRN] Health check failed: Degraded data redundancy: 10/627 objects degraded (1.595%), 4 pgs degraded (PG_DEGRADED)
2026-03-10T11:50:46.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:45 vm05 bash[68966]: cluster 2026-03-10T11:50:45.604184+0000 mon.a (mon.0) 406 : cluster [WRN] Health check failed: Degraded data redundancy: 10/627 objects degraded (1.595%), 4 pgs degraded (PG_DEGRADED)
2026-03-10T11:50:46.872 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:46 vm07 bash[59578]: debug 2026-03-10T11:50:46.602+0000 7f0e3b427740 -1 osd.5 0 read_superblock omap replica is missing.
2026-03-10T11:50:46.872 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:46 vm07 bash[59578]: debug 2026-03-10T11:50:46.618+0000 7f0e3b427740 -1 osd.5 127 log_to_monitors true 2026-03-10T11:50:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:46 vm07 bash[46158]: cluster 2026-03-10T11:50:45.459795+0000 mgr.y (mgr.44107) 264 : cluster [DBG] pgmap v128: 161 pgs: 3 active+undersized, 19 stale+active+clean, 4 active+undersized+degraded, 135 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s; 10/627 objects degraded (1.595%) 2026-03-10T11:50:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:46 vm07 bash[46158]: cluster 2026-03-10T11:50:45.459795+0000 mgr.y (mgr.44107) 264 : cluster [DBG] pgmap v128: 161 pgs: 3 active+undersized, 19 stale+active+clean, 4 active+undersized+degraded, 135 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s; 10/627 objects degraded (1.595%) 2026-03-10T11:50:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:46 vm07 bash[46158]: audit 2026-03-10T11:50:45.874115+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:46 vm07 bash[46158]: audit 2026-03-10T11:50:45.874115+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:46 vm07 bash[46158]: audit 2026-03-10T11:50:46.627445+0000 mon.b (mon.2) 32 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:46 vm07 bash[46158]: audit 2026-03-10T11:50:46.627445+0000 mon.b (mon.2) 32 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:46 vm07 bash[46158]: audit 2026-03-10T11:50:46.631621+0000 mon.a (mon.0) 408 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:46 vm07 bash[46158]: audit 2026-03-10T11:50:46.631621+0000 mon.a (mon.0) 408 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:46 vm07 bash[59578]: debug 2026-03-10T11:50:46.902+0000 7f0e331d2640 -1 osd.5 127 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:46 vm05 bash[65415]: cluster 2026-03-10T11:50:45.459795+0000 mgr.y (mgr.44107) 264 : cluster [DBG] pgmap v128: 161 pgs: 3 active+undersized, 19 stale+active+clean, 4 active+undersized+degraded, 135 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s; 10/627 objects degraded (1.595%) 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:46 vm05 bash[65415]: cluster 2026-03-10T11:50:45.459795+0000 mgr.y (mgr.44107) 264 : cluster [DBG] pgmap v128: 161 pgs: 3 active+undersized, 19 stale+active+clean, 4 
active+undersized+degraded, 135 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s; 10/627 objects degraded (1.595%) 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:46 vm05 bash[65415]: audit 2026-03-10T11:50:45.874115+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:46 vm05 bash[65415]: audit 2026-03-10T11:50:45.874115+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:46 vm05 bash[65415]: audit 2026-03-10T11:50:46.627445+0000 mon.b (mon.2) 32 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:46 vm05 bash[65415]: audit 2026-03-10T11:50:46.627445+0000 mon.b (mon.2) 32 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:46 vm05 bash[65415]: audit 2026-03-10T11:50:46.631621+0000 mon.a (mon.0) 408 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:46 vm05 bash[65415]: audit 2026-03-10T11:50:46.631621+0000 mon.a (mon.0) 408 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:46 vm05 bash[68966]: cluster 2026-03-10T11:50:45.459795+0000 mgr.y (mgr.44107) 264 : cluster [DBG] pgmap v128: 161 pgs: 3 active+undersized, 19 stale+active+clean, 4 active+undersized+degraded, 135 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s; 10/627 objects degraded (1.595%) 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:46 vm05 bash[68966]: cluster 2026-03-10T11:50:45.459795+0000 mgr.y (mgr.44107) 264 : cluster [DBG] pgmap v128: 161 pgs: 3 active+undersized, 19 stale+active+clean, 4 active+undersized+degraded, 135 active+clean; 457 KiB data, 208 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s; 10/627 objects degraded (1.595%) 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:46 vm05 bash[68966]: audit 2026-03-10T11:50:45.874115+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:46 vm05 bash[68966]: audit 2026-03-10T11:50:45.874115+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:46 vm05 bash[68966]: audit 2026-03-10T11:50:46.627445+0000 mon.b (mon.2) 32 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:46 vm05 bash[68966]: audit 2026-03-10T11:50:46.627445+0000 mon.b (mon.2) 32 : audit [INF] from='osd.5 
[v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:46 vm05 bash[68966]: audit 2026-03-10T11:50:46.631621+0000 mon.a (mon.0) 408 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:47.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:46 vm05 bash[68966]: audit 2026-03-10T11:50:46.631621+0000 mon.a (mon.0) 408 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-10T11:50:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:47 vm07 bash[46158]: audit 2026-03-10T11:50:46.884940+0000 mon.a (mon.0) 409 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T11:50:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:47 vm07 bash[46158]: audit 2026-03-10T11:50:46.884940+0000 mon.a (mon.0) 409 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T11:50:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:47 vm07 bash[46158]: audit 2026-03-10T11:50:46.887469+0000 mon.b (mon.2) 33 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:47 vm07 bash[46158]: audit 2026-03-10T11:50:46.887469+0000 mon.b (mon.2) 33 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:47 vm07 bash[46158]: cluster 2026-03-10T11:50:46.889192+0000 mon.a (mon.0) 410 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-10T11:50:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:47 vm07 bash[46158]: cluster 2026-03-10T11:50:46.889192+0000 mon.a (mon.0) 410 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-10T11:50:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:47 vm07 bash[46158]: audit 2026-03-10T11:50:46.891861+0000 mon.a (mon.0) 411 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:47 vm07 bash[46158]: audit 2026-03-10T11:50:46.891861+0000 mon.a (mon.0) 411 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:47 vm05 bash[65415]: audit 2026-03-10T11:50:46.884940+0000 mon.a (mon.0) 409 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:47 vm05 bash[65415]: audit 2026-03-10T11:50:46.884940+0000 mon.a 
(mon.0) 409 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:47 vm05 bash[65415]: audit 2026-03-10T11:50:46.887469+0000 mon.b (mon.2) 33 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:47 vm05 bash[65415]: audit 2026-03-10T11:50:46.887469+0000 mon.b (mon.2) 33 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:47 vm05 bash[65415]: cluster 2026-03-10T11:50:46.889192+0000 mon.a (mon.0) 410 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:47 vm05 bash[65415]: cluster 2026-03-10T11:50:46.889192+0000 mon.a (mon.0) 410 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:47 vm05 bash[65415]: audit 2026-03-10T11:50:46.891861+0000 mon.a (mon.0) 411 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:47 vm05 bash[65415]: audit 2026-03-10T11:50:46.891861+0000 mon.a (mon.0) 411 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:47 vm05 bash[68966]: audit 2026-03-10T11:50:46.884940+0000 mon.a (mon.0) 409 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:47 vm05 bash[68966]: audit 2026-03-10T11:50:46.884940+0000 mon.a (mon.0) 409 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:47 vm05 bash[68966]: audit 2026-03-10T11:50:46.887469+0000 mon.b (mon.2) 33 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:47 vm05 bash[68966]: audit 2026-03-10T11:50:46.887469+0000 mon.b (mon.2) 33 : audit [INF] from='osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766]' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:47 vm05 bash[68966]: cluster 2026-03-10T11:50:46.889192+0000 mon.a (mon.0) 410 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-10T11:50:48.340 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:47 vm05 bash[68966]: cluster 2026-03-10T11:50:46.889192+0000 mon.a (mon.0) 410 : cluster [DBG] osdmap e130: 8 total, 7 up, 8 in 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:47 vm05 bash[68966]: audit 2026-03-10T11:50:46.891861+0000 mon.a (mon.0) 411 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:48.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:47 vm05 bash[68966]: audit 2026-03-10T11:50:46.891861+0000 mon.a (mon.0) 411 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: cluster 2026-03-10T11:50:47.460182+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v130: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: cluster 2026-03-10T11:50:47.460182+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v130: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: cluster 2026-03-10T11:50:47.886266+0000 mon.a (mon.0) 412 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: cluster 2026-03-10T11:50:47.886266+0000 mon.a (mon.0) 412 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: cluster 2026-03-10T11:50:47.913113+0000 mon.a (mon.0) 413 : cluster [INF] osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766] boot 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: cluster 2026-03-10T11:50:47.913113+0000 mon.a (mon.0) 413 : cluster [INF] osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766] boot 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: cluster 2026-03-10T11:50:47.913205+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: cluster 2026-03-10T11:50:47.913205+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: audit 2026-03-10T11:50:47.924765+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:48 vm05 bash[68966]: audit 2026-03-10T11:50:47.924765+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: cluster 
2026-03-10T11:50:47.460182+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v130: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: cluster 2026-03-10T11:50:47.460182+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v130: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: cluster 2026-03-10T11:50:47.886266+0000 mon.a (mon.0) 412 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: cluster 2026-03-10T11:50:47.886266+0000 mon.a (mon.0) 412 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: cluster 2026-03-10T11:50:47.913113+0000 mon.a (mon.0) 413 : cluster [INF] osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766] boot 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: cluster 2026-03-10T11:50:47.913113+0000 mon.a (mon.0) 413 : cluster [INF] osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766] boot 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: cluster 2026-03-10T11:50:47.913205+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T11:50:49.143 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: cluster 2026-03-10T11:50:47.913205+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T11:50:49.144 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: audit 2026-03-10T11:50:47.924765+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:50:49.144 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:48 vm05 bash[65415]: audit 2026-03-10T11:50:47.924765+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:50:49.144 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:50:48 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:50:48] "GET /metrics HTTP/1.1" 200 37637 "" "Prometheus/2.51.0" 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: cluster 2026-03-10T11:50:47.460182+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v130: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: cluster 2026-03-10T11:50:47.460182+0000 mgr.y (mgr.44107) 265 : cluster [DBG] pgmap v130: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: cluster 
2026-03-10T11:50:47.886266+0000 mon.a (mon.0) 412 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: cluster 2026-03-10T11:50:47.886266+0000 mon.a (mon.0) 412 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down) 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: cluster 2026-03-10T11:50:47.913113+0000 mon.a (mon.0) 413 : cluster [INF] osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766] boot 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: cluster 2026-03-10T11:50:47.913113+0000 mon.a (mon.0) 413 : cluster [INF] osd.5 [v2:192.168.123.107:6808/1091141766,v1:192.168.123.107:6809/1091141766] boot 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: cluster 2026-03-10T11:50:47.913205+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: cluster 2026-03-10T11:50:47.913205+0000 mon.a (mon.0) 414 : cluster [DBG] osdmap e131: 8 total, 8 up, 8 in 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: audit 2026-03-10T11:50:47.924765+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:50:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:48 vm07 bash[46158]: audit 2026-03-10T11:50:47.924765+0000 mon.c (mon.1) 284 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-10T11:50:50.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:49 vm07 bash[46158]: cluster 2026-03-10T11:50:48.898387+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T11:50:50.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:49 vm07 bash[46158]: cluster 2026-03-10T11:50:48.898387+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T11:50:50.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:49 vm07 bash[46158]: audit 2026-03-10T11:50:49.148026+0000 mgr.y (mgr.44107) 266 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:50.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:49 vm07 bash[46158]: audit 2026-03-10T11:50:49.148026+0000 mgr.y (mgr.44107) 266 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:50.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:49 vm07 bash[46158]: audit 2026-03-10T11:50:49.713252+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:49 vm07 bash[46158]: audit 2026-03-10T11:50:49.713252+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:49 vm07 bash[46158]: audit 2026-03-10T11:50:49.719824+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:49 vm07 bash[46158]: audit 2026-03-10T11:50:49.719824+0000 mon.a 
(mon.0) 417 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:49 vm05 bash[68966]: cluster 2026-03-10T11:50:48.898387+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:49 vm05 bash[68966]: cluster 2026-03-10T11:50:48.898387+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:49 vm05 bash[68966]: audit 2026-03-10T11:50:49.148026+0000 mgr.y (mgr.44107) 266 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:49 vm05 bash[68966]: audit 2026-03-10T11:50:49.148026+0000 mgr.y (mgr.44107) 266 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:49 vm05 bash[68966]: audit 2026-03-10T11:50:49.713252+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:49 vm05 bash[68966]: audit 2026-03-10T11:50:49.713252+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:49 vm05 bash[68966]: audit 2026-03-10T11:50:49.719824+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:49 vm05 bash[68966]: audit 2026-03-10T11:50:49.719824+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:49 vm05 bash[65415]: cluster 2026-03-10T11:50:48.898387+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:49 vm05 bash[65415]: cluster 2026-03-10T11:50:48.898387+0000 mon.a (mon.0) 415 : cluster [DBG] osdmap e132: 8 total, 8 up, 8 in 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:49 vm05 bash[65415]: audit 2026-03-10T11:50:49.148026+0000 mgr.y (mgr.44107) 266 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:49 vm05 bash[65415]: audit 2026-03-10T11:50:49.148026+0000 mgr.y (mgr.44107) 266 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:49 vm05 bash[65415]: audit 2026-03-10T11:50:49.713252+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:49 vm05 bash[65415]: audit 2026-03-10T11:50:49.713252+0000 mon.a (mon.0) 416 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:49 vm05 bash[65415]: audit 2026-03-10T11:50:49.719824+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:50.299 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:49 vm05 bash[65415]: audit 
2026-03-10T11:50:49.719824+0000 mon.a (mon.0) 417 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: cluster 2026-03-10T11:50:49.460584+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v133: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: cluster 2026-03-10T11:50:49.460584+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v133: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: audit 2026-03-10T11:50:50.263982+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: audit 2026-03-10T11:50:50.263982+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: audit 2026-03-10T11:50:50.269502+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: audit 2026-03-10T11:50:50.269502+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: audit 2026-03-10T11:50:50.486709+0000 mon.c (mon.1) 285 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: audit 2026-03-10T11:50:50.486709+0000 mon.c (mon.1) 285 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: cluster 2026-03-10T11:50:50.671165+0000 mon.a (mon.0) 420 : cluster [WRN] Health check update: Degraded data redundancy: 74/627 objects degraded (11.802%), 22 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:50 vm05 bash[68966]: cluster 2026-03-10T11:50:50.671165+0000 mon.a (mon.0) 420 : cluster [WRN] Health check update: Degraded data redundancy: 74/627 objects degraded (11.802%), 22 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 bash[65415]: cluster 2026-03-10T11:50:49.460584+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v133: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 bash[65415]: cluster 2026-03-10T11:50:49.460584+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v133: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 
bash[65415]: audit 2026-03-10T11:50:50.263982+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 bash[65415]: audit 2026-03-10T11:50:50.263982+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 bash[65415]: audit 2026-03-10T11:50:50.269502+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 bash[65415]: audit 2026-03-10T11:50:50.269502+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 bash[65415]: audit 2026-03-10T11:50:50.486709+0000 mon.c (mon.1) 285 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 bash[65415]: audit 2026-03-10T11:50:50.486709+0000 mon.c (mon.1) 285 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 bash[65415]: cluster 2026-03-10T11:50:50.671165+0000 mon.a (mon.0) 420 : cluster [WRN] Health check update: Degraded data redundancy: 74/627 objects degraded (11.802%), 22 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:50 vm05 bash[65415]: cluster 2026-03-10T11:50:50.671165+0000 mon.a (mon.0) 420 : cluster [WRN] Health check update: Degraded data redundancy: 74/627 objects degraded (11.802%), 22 pgs degraded (PG_DEGRADED) 2026-03-10T11:50:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:50 vm07 bash[46158]: cluster 2026-03-10T11:50:49.460584+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v133: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:50 vm07 bash[46158]: cluster 2026-03-10T11:50:49.460584+0000 mgr.y (mgr.44107) 267 : cluster [DBG] pgmap v133: 161 pgs: 36 active+undersized, 22 active+undersized+degraded, 103 active+clean; 457 KiB data, 226 MiB used, 160 GiB / 160 GiB avail; 74/627 objects degraded (11.802%) 2026-03-10T11:50:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:50 vm07 bash[46158]: audit 2026-03-10T11:50:50.263982+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:50 vm07 bash[46158]: audit 2026-03-10T11:50:50.263982+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:50 vm07 bash[46158]: audit 2026-03-10T11:50:50.269502+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:50 vm07 bash[46158]: audit 2026-03-10T11:50:50.269502+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:50:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:50 vm07 bash[46158]: audit 2026-03-10T11:50:50.486709+0000 mon.c (mon.1) 285 : audit [DBG] 
from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:50:51.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:50 vm07 bash[46158]: cluster 2026-03-10T11:50:50.671165+0000 mon.a (mon.0) 420 : cluster [WRN] Health check update: Degraded data redundancy: 74/627 objects degraded (11.802%), 22 pgs degraded (PG_DEGRADED)
2026-03-10T11:50:53.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:52 vm05 bash[68966]: cluster 2026-03-10T11:50:51.460974+0000 mgr.y (mgr.44107) 268 : cluster [DBG] pgmap v134: 161 pgs: 25 active+undersized, 14 active+undersized+degraded, 122 active+clean; 457 KiB data, 230 MiB used, 160 GiB / 160 GiB avail; 46/627 objects degraded (7.337%)
2026-03-10T11:50:53.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:52 vm05 bash[65415]: cluster 2026-03-10T11:50:51.460974+0000 mgr.y (mgr.44107) 268 : cluster [DBG] pgmap v134: 161 pgs: 25 active+undersized, 14 active+undersized+degraded, 122 active+clean; 457 KiB data, 230 MiB used, 160 GiB / 160 GiB avail; 46/627 objects degraded (7.337%)
2026-03-10T11:50:53.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:52 vm07 bash[46158]: cluster 2026-03-10T11:50:51.460974+0000 mgr.y (mgr.44107) 268 : cluster [DBG] pgmap v134: 161 pgs: 25 active+undersized, 14 active+undersized+degraded, 122 active+clean; 457 KiB data, 230 MiB used, 160 GiB / 160 GiB avail; 46/627 objects degraded (7.337%)
2026-03-10T11:50:54.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:54 vm05 bash[68966]: cluster 2026-03-10T11:50:53.977083+0000 mon.a (mon.0) 421 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 46/627 objects degraded (7.337%), 14 pgs degraded)
2026-03-10T11:50:54.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:54 vm05 bash[68966]: cluster 2026-03-10T11:50:53.977098+0000 mon.a (mon.0) 422 : cluster [INF] Cluster is now healthy
2026-03-10T11:50:54.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:54 vm05 bash[65415]: cluster 2026-03-10T11:50:53.977083+0000 mon.a (mon.0) 421 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 46/627 objects degraded (7.337%), 14 pgs degraded)
2026-03-10T11:50:54.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:54 vm05 bash[65415]: cluster 2026-03-10T11:50:53.977098+0000 mon.a (mon.0) 422 : cluster [INF] Cluster is now healthy
2026-03-10T11:50:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:54 vm07 bash[46158]: cluster 2026-03-10T11:50:53.977083+0000 mon.a (mon.0) 421 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 46/627 objects degraded (7.337%), 14 pgs degraded)
2026-03-10T11:50:54.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:54 vm07 bash[46158]: cluster 2026-03-10T11:50:53.977098+0000 mon.a (mon.0) 422 : cluster [INF] Cluster is now healthy
2026-03-10T11:50:55.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:55 vm05 bash[68966]: cluster 2026-03-10T11:50:53.461338+0000 mgr.y (mgr.44107) 269 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 467 B/s rd, 0 op/s
2026-03-10T11:50:55.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:55 vm05 bash[65415]: cluster 2026-03-10T11:50:53.461338+0000 mgr.y (mgr.44107) 269 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 467 B/s rd, 0 op/s
2026-03-10T11:50:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:55 vm07 bash[46158]: cluster 2026-03-10T11:50:53.461338+0000 mgr.y (mgr.44107) 269 : cluster [DBG] pgmap v135: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 467 B/s rd, 0 op/s
2026-03-10T11:50:56.842 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: cluster 2026-03-10T11:50:55.461695+0000 mgr.y (mgr.44107) 270 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.845176+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.849698+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.851210+0000 mon.c (mon.1) 286 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.852116+0000 mon.c (mon.1) 287 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.856213+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.44107 ' entity='mgr.y'
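The degraded and undersized PG counts above climb and then clear while the staggered upgrade restarts OSDs one at a time; that churn is expected during such a window and resolves on its own, as the "Cluster is now healthy" transition shows. A minimal way to watch the same cycle by hand, using only standard ceph CLI commands (illustrative, not part of this test run):

    # cluster summary: pgmap states, degraded-object counts, client/recovery I/O
    ceph -s
    # expand any active health checks, e.g. PG_DEGRADED or OSD_DOWN
    ceph health detail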
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.896480+0000 mon.c (mon.1) 288 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.898001+0000 mon.c (mon.1) 289 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.899175+0000 mon.c (mon.1) 290 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.900291+0000 mon.c (mon.1) 291 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.901593+0000 mon.c (mon.1) 292 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:55.901970+0000 mgr.y (mgr.44107) 271 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: cephadm 2026-03-10T11:50:55.902517+0000 mgr.y (mgr.44107) 272 : cephadm [INF] Upgrade: osd.6 is safe to restart
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:56.288128+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:56.292242+0000 mon.c (mon.1) 293 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T11:50:56.843 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 bash[46158]: audit 2026-03-10T11:50:56.293162+0000 mon.c (mon.1) 294 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:57.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:57.196 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:50:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:57.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:57.196 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:57.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:50:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:57.196 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:50:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:57.196 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:50:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:57.196 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:50:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:57.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:56 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:57.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:57 vm07 systemd[1]: Stopping Ceph osd.6 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
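Every daemon journal on the host repeats the same complaint because it originates from line 23 of the shared unit template /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service, which the v17.2.0 cephadm that bootstrapped this cluster generated with KillMode=none (container teardown is left to the container runtime rather than to systemd). It is a deprecation warning, not an error. If it ever had to be silenced before an upgrade delivers a corrected template, the generic systemd mechanism would be a drop-in override; a hypothetical sketch only, not a cephadm-documented procedure (fsid taken from the log):

    # hypothetical drop-in replacing the deprecated KillMode for this fsid's units
    mkdir -p /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d
    printf '[Service]\nKillMode=mixed\n' \
        > /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d/override.conf
    systemctl daemon-reload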
2026-03-10T11:50:57.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:57 vm07 bash[27159]: debug 2026-03-10T11:50:57.066+0000 7fda58563700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:50:57.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:57 vm07 bash[27159]: debug 2026-03-10T11:50:57.066+0000 7fda58563700 -1 osd.6 132 *** Got signal Terminated ***
2026-03-10T11:50:57.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:57 vm07 bash[27159]: debug 2026-03-10T11:50:57.066+0000 7fda58563700 -1 osd.6 132 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: cluster 2026-03-10T11:50:55.461695+0000 mgr.y (mgr.44107) 270 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.845176+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.849698+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.851210+0000 mon.c (mon.1) 286 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.852116+0000 mon.c (mon.1) 287 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.856213+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.896480+0000 mon.c (mon.1) 288 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.898001+0000 mon.c (mon.1) 289 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.899175+0000 mon.c (mon.1) 290 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.900291+0000 mon.c (mon.1) 291 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.901593+0000 mon.c (mon.1) 292 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:55.901970+0000 mgr.y (mgr.44107) 271 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: cephadm 2026-03-10T11:50:55.902517+0000 mgr.y (mgr.44107) 272 : cephadm [INF] Upgrade: osd.6 is safe to restart
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:56.288128+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:56.292242+0000 mon.c (mon.1) 293 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:56 vm05 bash[68966]: audit 2026-03-10T11:50:56.293162+0000 mon.c (mon.1) 294 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: cluster 2026-03-10T11:50:55.461695+0000 mgr.y (mgr.44107) 270 : cluster [DBG] pgmap v136: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:50:57.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.845176+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.849698+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.851210+0000 mon.c (mon.1) 286 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.852116+0000 mon.c (mon.1) 287 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.856213+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.896480+0000 mon.c (mon.1) 288 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.898001+0000 mon.c (mon.1) 289 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.899175+0000 mon.c (mon.1) 290 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.900291+0000 mon.c (mon.1) 291 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.901593+0000 mon.c (mon.1) 292 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:55.901970+0000 mgr.y (mgr.44107) 271 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["6"], "max": 16}]: dispatch
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: cephadm 2026-03-10T11:50:55.902517+0000 mgr.y (mgr.44107) 272 : cephadm [INF] Upgrade: osd.6 is safe to restart
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:56.288128+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:56.292242+0000 mon.c (mon.1) 293 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-10T11:50:57.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:56 vm05 bash[65415]: audit 2026-03-10T11:50:56.293162+0000 mon.c (mon.1) 294 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:50:58.181 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:57 vm07 bash[46158]: cephadm 2026-03-10T11:50:56.283534+0000 mgr.y (mgr.44107) 273 : cephadm [INF] Upgrade: Updating osd.6
2026-03-10T11:50:58.181 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:57 vm07 bash[46158]: cephadm 2026-03-10T11:50:56.295398+0000 mgr.y (mgr.44107) 274 : cephadm [INF] Deploying daemon osd.6 on vm07
2026-03-10T11:50:58.181 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:57 vm07 bash[46158]: cluster 2026-03-10T11:50:57.076348+0000 mon.a (mon.0) 427 : cluster [INF] osd.6 marked itself down and dead
2026-03-10T11:50:58.181 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:57 vm07 bash[63876]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-6
2026-03-10T11:50:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:57 vm05 bash[65415]: cephadm 2026-03-10T11:50:56.283534+0000 mgr.y (mgr.44107) 273 : cephadm [INF] Upgrade: Updating osd.6
2026-03-10T11:50:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:57 vm05 bash[65415]: cephadm 2026-03-10T11:50:56.295398+0000 mgr.y (mgr.44107) 274 : cephadm [INF] Deploying daemon osd.6 on vm07
2026-03-10T11:50:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:57 vm05 bash[65415]: cluster 2026-03-10T11:50:57.076348+0000 mon.a (mon.0) 427 : cluster [INF] osd.6 marked itself down and dead
2026-03-10T11:50:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:57 vm05 bash[68966]: cephadm 2026-03-10T11:50:56.283534+0000 mgr.y (mgr.44107) 273 : cephadm [INF] Upgrade: Updating osd.6
2026-03-10T11:50:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:57 vm05 bash[68966]: cephadm 2026-03-10T11:50:56.295398+0000 mgr.y (mgr.44107) 274 : cephadm [INF] Deploying daemon osd.6 on vm07
2026-03-10T11:50:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:57 vm05 bash[68966]: cluster 2026-03-10T11:50:57.076348+0000 mon.a (mon.0) 427 : cluster [INF] osd.6 marked itself down and dead
2026-03-10T11:50:58.445 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.6.service: Deactivated successfully.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: Stopped Ceph osd.6 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
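This stop/start pair is the redeploy itself: cephadm stops the per-daemon unit, ceph-osd catches SIGTERM and takes the immediate osd_fast_shutdown path (hence "marked itself down and dead" rather than waiting for peers to fail it), systemd reports the old unit "Deactivated successfully", and the same instance is started again from the target container image. During such a window the daemon can be checked from both sides (standard commands; the unit name is reconstructed from the fsid in the log):

    # host side: the per-daemon systemd unit that cephadm manages
    systemctl status 'ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.6.service'
    # cluster side: daemon placement and per-daemon versions mid-upgrade
    ceph orch ps
    ceph versions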
2026-03-10T11:50:58.446 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: Started Ceph osd.6 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:50:58.446 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:50:58 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:50:58.845 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:58 vm07 bash[64084]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:58.845 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:58 vm07 bash[64084]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:59.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:58 vm05 bash[65415]: cluster 2026-03-10T11:50:57.462165+0000 mgr.y (mgr.44107) 275 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 963 B/s rd, 0 op/s
2026-03-10T11:50:59.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:58 vm05 bash[65415]: cluster 2026-03-10T11:50:57.845709+0000 mon.a (mon.0) 428 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:50:59.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:58 vm05 bash[65415]: cluster 2026-03-10T11:50:57.882549+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e133: 8 total, 7 up, 8 in
2026-03-10T11:50:59.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:58 vm05 bash[65415]: audit 2026-03-10T11:50:58.452985+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:59.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:58 vm05 bash[65415]: audit 2026-03-10T11:50:58.460962+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:59.153 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:58 vm05 bash[65415]: audit 2026-03-10T11:50:58.461947+0000 mon.c (mon.1) 295 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:59.154 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:50:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:50:58] "GET /metrics HTTP/1.1" 200 37649 "" "Prometheus/2.51.0"
2026-03-10T11:50:59.154 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:58 vm05 bash[68966]: cluster 2026-03-10T11:50:57.462165+0000 mgr.y (mgr.44107) 275 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 963 B/s rd, 0 op/s
2026-03-10T11:50:59.154 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:58 vm05 bash[68966]: cluster 2026-03-10T11:50:57.845709+0000 mon.a (mon.0) 428 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:50:59.154 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:58 vm05 bash[68966]: cluster 2026-03-10T11:50:57.882549+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e133: 8 total, 7 up, 8 in
2026-03-10T11:50:59.154 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:58 vm05 bash[68966]: audit 2026-03-10T11:50:58.452985+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:59.154 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:58 vm05 bash[68966]: audit 2026-03-10T11:50:58.460962+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:59.154 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:58 vm05 bash[68966]: audit 2026-03-10T11:50:58.461947+0000 mon.c (mon.1) 295 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:59.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:58 vm07 bash[46158]: cluster 2026-03-10T11:50:57.462165+0000 mgr.y (mgr.44107) 275 : cluster [DBG] pgmap v137: 161 pgs: 161 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 963 B/s rd, 0 op/s
2026-03-10T11:50:59.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:58 vm07 bash[46158]: cluster 2026-03-10T11:50:57.845709+0000 mon.a (mon.0) 428 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:50:59.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:58 vm07 bash[46158]: cluster 2026-03-10T11:50:57.882549+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e133: 8 total, 7 up, 8 in
2026-03-10T11:50:59.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:58 vm07 bash[46158]: audit 2026-03-10T11:50:58.452985+0000 mon.a (mon.0) 430 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:59.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:58 vm07 bash[46158]: audit 2026-03-10T11:50:58.460962+0000 mon.a (mon.0) 431 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:50:59.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:58 vm07 bash[46158]: audit 2026-03-10T11:50:58.461947+0000 mon.c (mon.1) 295 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:50:59.826 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T11:50:59.826 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:59.826 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:50:59.826 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
2026-03-10T11:50:59.826 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-1288f8a1-a21a-4636-b7c5-7f3ebe3cf2e1/osd-block-783416c9-d1a2-4d8f-91e5-b6343f3a3d0a --path /var/lib/ceph/osd/ceph-6 --no-mon-config
2026-03-10T11:51:00.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:50:59 vm07 bash[46158]: cluster 2026-03-10T11:50:58.884454+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e134: 8 total, 7 up, 8 in
2026-03-10T11:51:00.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: Running command: /usr/bin/ln -snf /dev/ceph-1288f8a1-a21a-4636-b7c5-7f3ebe3cf2e1/osd-block-783416c9-d1a2-4d8f-91e5-b6343f3a3d0a /var/lib/ceph/osd/ceph-6/block
2026-03-10T11:51:00.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-6/block
2026-03-10T11:51:00.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
2026-03-10T11:51:00.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6
2026-03-10T11:51:00.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64084]: --> ceph-volume lvm activate successful for osd ID: 6
2026-03-10T11:51:00.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:50:59 vm07 bash[64441]: debug 2026-03-10T11:50:59.982+0000 7f86d9cca640 1 -- 192.168.123.107:0/2471858991 <== mon.2 v2:192.168.123.107:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x5640046b1680 con 0x5640038bfc00
2026-03-10T11:51:00.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:50:59 vm05 bash[65415]: cluster 2026-03-10T11:50:58.884454+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e134: 8 total, 7 up, 8 in
2026-03-10T11:51:00.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:50:59 vm05 bash[68966]: cluster 2026-03-10T11:50:58.884454+0000 mon.a (mon.0) 432 : cluster [DBG] osdmap e134: 8 total, 7 up, 8 in
2026-03-10T11:51:00.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:00 vm07 bash[46158]: audit 2026-03-10T11:50:59.158331+0000 mgr.y (mgr.44107) 276 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:00.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:00 vm07 bash[46158]: cluster 2026-03-10T11:50:59.462442+0000 mgr.y (mgr.44107) 277 : cluster [DBG] pgmap v140: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:51:00.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:51:00 vm07 bash[64441]: debug 2026-03-10T11:51:00.682+0000 7f86dc534740 -1 Falling back to public interface
2026-03-10T11:51:01.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:00 vm05 bash[65415]: audit 2026-03-10T11:50:59.158331+0000 mgr.y (mgr.44107) 276 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:01.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:00 vm05 bash[65415]: cluster 2026-03-10T11:50:59.462442+0000 mgr.y (mgr.44107) 277 : cluster [DBG] pgmap v140: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:51:01.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:00 vm05 bash[68966]: audit 2026-03-10T11:50:59.158331+0000 mgr.y (mgr.44107) 276 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:01.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:00 vm05 bash[68966]: cluster 2026-03-10T11:50:59.462442+0000 mgr.y (mgr.44107) 277 : cluster [DBG] pgmap v140: 161 pgs: 13 stale+active+clean, 148 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:51:02.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:01 vm07 bash[46158]: audit 2026-03-10T11:51:00.885495+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:02.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:51:01 vm07 bash[64441]: debug 2026-03-10T11:51:01.910+0000 7f86dc534740 -1 osd.6 0 read_superblock omap replica is missing.
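On restart, the new osd.6 container re-activates its storage before ceph-osd runs: the raw-device probe finds nothing ("Failed to activate via raw" is harmless in an LVM deployment), ceph-volume falls back to LVM, primes /var/lib/ceph/osd/ceph-6 from the BlueStore logical volume, fixes ownership, and reports "lvm activate successful". An equivalent manual inspection, shown only as a sketch (the OSD fsid below is read off the osd-block LV name in the log):

    # on vm07: what ceph-volume records for osd.6 (runs inside a container)
    cephadm ceph-volume lvm list 6
    # manual activation takes the OSD id plus its fsid, e.g.:
    #   ceph-volume lvm activate 6 783416c9-d1a2-4d8f-91e5-b6343f3a3d0a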
2026-03-10T11:51:02.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:51:01 vm07 bash[64441]: debug 2026-03-10T11:51:01.938+0000 7f86dc534740 -1 osd.6 132 log_to_monitors true 2026-03-10T11:51:02.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:01 vm05 bash[65415]: audit 2026-03-10T11:51:00.885495+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:02.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:01 vm05 bash[65415]: audit 2026-03-10T11:51:00.885495+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:02.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:01 vm05 bash[68966]: audit 2026-03-10T11:51:00.885495+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:02.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:01 vm05 bash[68966]: audit 2026-03-10T11:51:00.885495+0000 mon.a (mon.0) 433 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:02 vm07 bash[46158]: cluster 2026-03-10T11:51:01.462972+0000 mgr.y (mgr.44107) 278 : cluster [DBG] pgmap v141: 161 pgs: 13 active+undersized, 10 stale+active+clean, 6 active+undersized+degraded, 132 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 895 B/s rd, 0 op/s; 23/627 objects degraded (3.668%) 2026-03-10T11:51:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:02 vm07 bash[46158]: cluster 2026-03-10T11:51:01.462972+0000 mgr.y (mgr.44107) 278 : cluster [DBG] pgmap v141: 161 pgs: 13 active+undersized, 10 stale+active+clean, 6 active+undersized+degraded, 132 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 895 B/s rd, 0 op/s; 23/627 objects degraded (3.668%) 2026-03-10T11:51:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:02 vm07 bash[46158]: cluster 2026-03-10T11:51:01.882659+0000 mon.a (mon.0) 434 : cluster [WRN] Health check failed: Degraded data redundancy: 23/627 objects degraded (3.668%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T11:51:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:02 vm07 bash[46158]: cluster 2026-03-10T11:51:01.882659+0000 mon.a (mon.0) 434 : cluster [WRN] Health check failed: Degraded data redundancy: 23/627 objects degraded (3.668%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T11:51:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:02 vm07 bash[46158]: audit 2026-03-10T11:51:01.952510+0000 mon.c (mon.1) 296 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:02 vm07 bash[46158]: audit 2026-03-10T11:51:01.952510+0000 mon.c (mon.1) 296 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:02 vm07 bash[46158]: audit 2026-03-10T11:51:01.952815+0000 mon.a (mon.0) 435 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:02 vm07 bash[46158]: audit 2026-03-10T11:51:01.952815+0000 mon.a (mon.0) 435 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:02 vm05 bash[65415]: cluster 2026-03-10T11:51:01.462972+0000 mgr.y (mgr.44107) 278 : cluster [DBG] pgmap v141: 161 pgs: 13 active+undersized, 10 stale+active+clean, 6 active+undersized+degraded, 132 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 895 B/s rd, 0 op/s; 23/627 objects degraded (3.668%) 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:02 vm05 bash[65415]: cluster 2026-03-10T11:51:01.462972+0000 mgr.y (mgr.44107) 278 : cluster [DBG] pgmap v141: 161 pgs: 13 active+undersized, 10 stale+active+clean, 6 active+undersized+degraded, 132 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 895 B/s rd, 0 op/s; 23/627 objects degraded (3.668%) 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:02 vm05 bash[65415]: cluster 2026-03-10T11:51:01.882659+0000 mon.a (mon.0) 434 : cluster [WRN] Health check failed: Degraded data redundancy: 23/627 objects degraded (3.668%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:02 vm05 bash[65415]: cluster 2026-03-10T11:51:01.882659+0000 mon.a (mon.0) 434 : cluster [WRN] Health check failed: Degraded data redundancy: 23/627 objects degraded (3.668%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:02 vm05 bash[65415]: audit 2026-03-10T11:51:01.952510+0000 mon.c (mon.1) 296 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:02 vm05 bash[65415]: audit 2026-03-10T11:51:01.952510+0000 mon.c (mon.1) 296 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:02 vm05 bash[65415]: audit 2026-03-10T11:51:01.952815+0000 mon.a (mon.0) 435 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:02 vm05 bash[65415]: audit 2026-03-10T11:51:01.952815+0000 mon.a (mon.0) 435 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:02 vm05 bash[68966]: cluster 2026-03-10T11:51:01.462972+0000 mgr.y (mgr.44107) 278 : cluster [DBG] pgmap v141: 161 pgs: 13 active+undersized, 10 stale+active+clean, 6 active+undersized+degraded, 132 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 895 B/s rd, 0 op/s; 23/627 objects degraded (3.668%) 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:02 vm05 bash[68966]: cluster 2026-03-10T11:51:01.462972+0000 mgr.y (mgr.44107) 278 : cluster [DBG] pgmap v141: 161 pgs: 13 active+undersized, 10 stale+active+clean, 6 active+undersized+degraded, 132 active+clean; 457 KiB data, 231 MiB used, 160 GiB / 160 GiB avail; 895 B/s rd, 0 op/s; 23/627 objects degraded (3.668%) 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:02 
vm05 bash[68966]: cluster 2026-03-10T11:51:01.882659+0000 mon.a (mon.0) 434 : cluster [WRN] Health check failed: Degraded data redundancy: 23/627 objects degraded (3.668%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:02 vm05 bash[68966]: cluster 2026-03-10T11:51:01.882659+0000 mon.a (mon.0) 434 : cluster [WRN] Health check failed: Degraded data redundancy: 23/627 objects degraded (3.668%), 6 pgs degraded (PG_DEGRADED) 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:02 vm05 bash[68966]: audit 2026-03-10T11:51:01.952510+0000 mon.c (mon.1) 296 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:02 vm05 bash[68966]: audit 2026-03-10T11:51:01.952510+0000 mon.c (mon.1) 296 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:02 vm05 bash[68966]: audit 2026-03-10T11:51:01.952815+0000 mon.a (mon.0) 435 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:03.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:02 vm05 bash[68966]: audit 2026-03-10T11:51:01.952815+0000 mon.a (mon.0) 435 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch 2026-03-10T11:51:04.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:03 vm07 bash[46158]: audit 2026-03-10T11:51:02.894348+0000 mon.a (mon.0) 436 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T11:51:04.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:03 vm07 bash[46158]: audit 2026-03-10T11:51:02.894348+0000 mon.a (mon.0) 436 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T11:51:04.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:03 vm07 bash[46158]: cluster 2026-03-10T11:51:02.899256+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e135: 8 total, 7 up, 8 in 2026-03-10T11:51:04.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:03 vm07 bash[46158]: cluster 2026-03-10T11:51:02.899256+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e135: 8 total, 7 up, 8 in 2026-03-10T11:51:04.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:03 vm07 bash[46158]: audit 2026-03-10T11:51:02.902646+0000 mon.c (mon.1) 297 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:03 vm07 bash[46158]: audit 2026-03-10T11:51:02.902646+0000 mon.c (mon.1) 297 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.196 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:03 vm07 bash[46158]: audit 2026-03-10T11:51:02.912277+0000 mon.a (mon.0) 438 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:03 vm07 bash[46158]: audit 2026-03-10T11:51:02.912277+0000 mon.a (mon.0) 438 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:51:03 vm07 bash[64441]: debug 2026-03-10T11:51:03.954+0000 7f86d3ade640 -1 osd.6 132 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:03 vm05 bash[65415]: audit 2026-03-10T11:51:02.894348+0000 mon.a (mon.0) 436 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:03 vm05 bash[65415]: audit 2026-03-10T11:51:02.894348+0000 mon.a (mon.0) 436 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:03 vm05 bash[65415]: cluster 2026-03-10T11:51:02.899256+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e135: 8 total, 7 up, 8 in 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:03 vm05 bash[65415]: cluster 2026-03-10T11:51:02.899256+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e135: 8 total, 7 up, 8 in 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:03 vm05 bash[65415]: audit 2026-03-10T11:51:02.902646+0000 mon.c (mon.1) 297 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:03 vm05 bash[65415]: audit 2026-03-10T11:51:02.902646+0000 mon.c (mon.1) 297 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:03 vm05 bash[65415]: audit 2026-03-10T11:51:02.912277+0000 mon.a (mon.0) 438 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:03 vm05 bash[65415]: audit 2026-03-10T11:51:02.912277+0000 mon.a (mon.0) 438 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:03 vm05 bash[68966]: audit 2026-03-10T11:51:02.894348+0000 mon.a (mon.0) 436 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 
2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:03 vm05 bash[68966]: audit 2026-03-10T11:51:02.894348+0000 mon.a (mon.0) 436 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:03 vm05 bash[68966]: cluster 2026-03-10T11:51:02.899256+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e135: 8 total, 7 up, 8 in 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:03 vm05 bash[68966]: cluster 2026-03-10T11:51:02.899256+0000 mon.a (mon.0) 437 : cluster [DBG] osdmap e135: 8 total, 7 up, 8 in 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:03 vm05 bash[68966]: audit 2026-03-10T11:51:02.902646+0000 mon.c (mon.1) 297 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:03 vm05 bash[68966]: audit 2026-03-10T11:51:02.902646+0000 mon.c (mon.1) 297 : audit [INF] from='osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604]' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:03 vm05 bash[68966]: audit 2026-03-10T11:51:02.912277+0000 mon.a (mon.0) 438 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:04.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:03 vm05 bash[68966]: audit 2026-03-10T11:51:02.912277+0000 mon.a (mon.0) 438 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:05.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:04 vm05 bash[65415]: cluster 2026-03-10T11:51:03.463267+0000 mgr.y (mgr.44107) 279 : cluster [DBG] pgmap v143: 161 pgs: 35 active+undersized, 18 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s; 74/627 objects degraded (11.802%) 2026-03-10T11:51:05.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:04 vm05 bash[65415]: cluster 2026-03-10T11:51:03.463267+0000 mgr.y (mgr.44107) 279 : cluster [DBG] pgmap v143: 161 pgs: 35 active+undersized, 18 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s; 74/627 objects degraded (11.802%) 2026-03-10T11:51:05.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:04 vm05 bash[68966]: cluster 2026-03-10T11:51:03.463267+0000 mgr.y (mgr.44107) 279 : cluster [DBG] pgmap v143: 161 pgs: 35 active+undersized, 18 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s; 74/627 objects degraded (11.802%) 2026-03-10T11:51:05.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:04 vm05 bash[68966]: cluster 2026-03-10T11:51:03.463267+0000 mgr.y (mgr.44107) 279 : cluster [DBG] pgmap v143: 161 pgs: 35 active+undersized, 18 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 
op/s; 74/627 objects degraded (11.802%) 2026-03-10T11:51:05.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:04 vm07 bash[46158]: cluster 2026-03-10T11:51:03.463267+0000 mgr.y (mgr.44107) 279 : cluster [DBG] pgmap v143: 161 pgs: 35 active+undersized, 18 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s; 74/627 objects degraded (11.802%) 2026-03-10T11:51:05.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:04 vm07 bash[46158]: cluster 2026-03-10T11:51:03.463267+0000 mgr.y (mgr.44107) 279 : cluster [DBG] pgmap v143: 161 pgs: 35 active+undersized, 18 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 170 B/s rd, 0 op/s; 74/627 objects degraded (11.802%) 2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: cluster 2026-03-10T11:51:03.941874+0000 osd.6 (osd.6) 1 : cluster [WRN] OSD bench result of 23952.779299 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.6. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: cluster 2026-03-10T11:51:03.941874+0000 osd.6 (osd.6) 1 : cluster [WRN] OSD bench result of 23952.779299 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.6. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. 
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: cluster 2026-03-10T11:51:04.923269+0000 mon.a (mon.0) 439 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: cluster 2026-03-10T11:51:04.931266+0000 mon.a (mon.0) 440 : cluster [INF] osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604] boot
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: cluster 2026-03-10T11:51:04.932090+0000 mon.a (mon.0) 441 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: audit 2026-03-10T11:51:04.949874+0000 mon.c (mon.1) 298 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: audit 2026-03-10T11:51:05.262794+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: audit 2026-03-10T11:51:05.270003+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: audit 2026-03-10T11:51:05.486955+0000 mon.c (mon.1) 299 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: audit 2026-03-10T11:51:05.866106+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: audit 2026-03-10T11:51:05.872598+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:05 vm05 bash[65415]: cluster 2026-03-10T11:51:05.933088+0000 mon.a (mon.0) 446 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: cluster 2026-03-10T11:51:03.941874+0000 osd.6 (osd.6) 1 : cluster [WRN] OSD bench result of 23952.779299 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.6. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: cluster 2026-03-10T11:51:04.923269+0000 mon.a (mon.0) 439 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: cluster 2026-03-10T11:51:04.931266+0000 mon.a (mon.0) 440 : cluster [INF] osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604] boot
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: cluster 2026-03-10T11:51:04.932090+0000 mon.a (mon.0) 441 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: audit 2026-03-10T11:51:04.949874+0000 mon.c (mon.1) 298 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: audit 2026-03-10T11:51:05.262794+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: audit 2026-03-10T11:51:05.270003+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: audit 2026-03-10T11:51:05.486955+0000 mon.c (mon.1) 299 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: audit 2026-03-10T11:51:05.866106+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: audit 2026-03-10T11:51:05.872598+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:05 vm05 bash[68966]: cluster 2026-03-10T11:51:05.933088+0000 mon.a (mon.0) 446 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T11:51:06.415 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: cluster 2026-03-10T11:51:03.941874+0000 osd.6 (osd.6) 1 : cluster [WRN] OSD bench result of 23952.779299 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.6. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
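The OSD bench warning above is informational: the mclock scheduler discards the implausible 23952.78 IOPS measurement and keeps osd.6's previous capacity of 315 IOPS, and the same cluster-log entry simply appears once per monitor because each mon's journalctl stream echoes it. If the log's recommendation were worth acting on, a minimal sketch (illustrative only, not a step this job runs; the value would come from an out-of-band fio benchmark) would be:

  ceph config set osd.6 osd_mclock_max_capacity_iops_hdd 315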
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: cluster 2026-03-10T11:51:04.923269+0000 mon.a (mon.0) 439 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: cluster 2026-03-10T11:51:04.931266+0000 mon.a (mon.0) 440 : cluster [INF] osd.6 [v2:192.168.123.107:6816/2107035604,v1:192.168.123.107:6817/2107035604] boot
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: cluster 2026-03-10T11:51:04.932090+0000 mon.a (mon.0) 441 : cluster [DBG] osdmap e136: 8 total, 8 up, 8 in
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: audit 2026-03-10T11:51:04.949874+0000 mon.c (mon.1) 298 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: audit 2026-03-10T11:51:05.262794+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: audit 2026-03-10T11:51:05.270003+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: audit 2026-03-10T11:51:05.486955+0000 mon.c (mon.1) 299 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: audit 2026-03-10T11:51:05.866106+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: audit 2026-03-10T11:51:05.872598+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:06.416 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:05 vm07 bash[46158]: cluster 2026-03-10T11:51:05.933088+0000 mon.a (mon.0) 446 : cluster [DBG] osdmap e137: 8 total, 8 up, 8 in
2026-03-10T11:51:07.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:06 vm05 bash[65415]: cluster 2026-03-10T11:51:05.463625+0000 mgr.y (mgr.44107) 280 : cluster [DBG] pgmap v145: 161 pgs: 3 peering, 33 active+undersized, 17 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 930 B/s rd, 0 op/s; 73/627 objects degraded (11.643%)
2026-03-10T11:51:07.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:06 vm05 bash[68966]: cluster 2026-03-10T11:51:05.463625+0000 mgr.y (mgr.44107) 280 : cluster [DBG] pgmap v145: 161 pgs: 3 peering, 33 active+undersized, 17 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 930 B/s rd, 0 op/s; 73/627 objects degraded (11.643%)
2026-03-10T11:51:07.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:06 vm07 bash[46158]: cluster 2026-03-10T11:51:05.463625+0000 mgr.y (mgr.44107) 280 : cluster [DBG] pgmap v145: 161 pgs: 3 peering, 33 active+undersized, 17 active+undersized+degraded, 108 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 930 B/s rd, 0 op/s; 73/627 objects degraded (11.643%)
2026-03-10T11:51:08.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:08 vm05 bash[65415]: cluster 2026-03-10T11:51:07.955982+0000 mon.a (mon.0) 447 : cluster [WRN] Health check update: Degraded data redundancy: 13/627 objects degraded (2.073%), 4 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:08.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:08 vm05 bash[68966]: cluster 2026-03-10T11:51:07.955982+0000 mon.a (mon.0) 447 : cluster [WRN] Health check update: Degraded data redundancy: 13/627 objects degraded (2.073%), 4 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:08.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:08 vm07 bash[46158]: cluster 2026-03-10T11:51:07.955982+0000 mon.a (mon.0) 447 : cluster [WRN] Health check update: Degraded data redundancy: 13/627 objects degraded (2.073%), 4 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:09.162 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:09 vm05 bash[65415]: cluster 2026-03-10T11:51:07.464188+0000 mgr.y (mgr.44107) 281 : cluster [DBG] pgmap v147: 161 pgs: 4 activating, 12 peering, 9 active+undersized, 4 active+undersized+degraded, 132 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 13/627 objects degraded (2.073%)
2026-03-10T11:51:09.162 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:09 vm05 bash[68966]: cluster 2026-03-10T11:51:07.464188+0000 mgr.y (mgr.44107) 281 : cluster [DBG] pgmap v147: 161 pgs: 4 activating, 12 peering, 9 active+undersized, 4 active+undersized+degraded, 132 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 13/627 objects degraded (2.073%)
2026-03-10T11:51:09.162 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:51:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:51:08] "GET /metrics HTTP/1.1" 200 37663 "" "Prometheus/2.51.0"
2026-03-10T11:51:09.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:09 vm07 bash[46158]: cluster 2026-03-10T11:51:07.464188+0000 mgr.y (mgr.44107) 281 : cluster [DBG] pgmap v147: 161 pgs: 4 activating, 12 peering, 9 active+undersized, 4 active+undersized+degraded, 132 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s; 13/627 objects degraded (2.073%)
2026-03-10T11:51:10.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:10 vm05 bash[65415]: cluster 2026-03-10T11:51:10.011410+0000 mon.a (mon.0) 448 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 13/627 objects degraded (2.073%), 4 pgs degraded)
2026-03-10T11:51:10.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:10 vm05 bash[65415]: cluster 2026-03-10T11:51:10.011431+0000 mon.a (mon.0) 449 : cluster [INF] Cluster is now healthy
2026-03-10T11:51:10.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:10 vm05 bash[68966]: cluster 2026-03-10T11:51:10.011410+0000 mon.a (mon.0) 448 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 13/627 objects degraded (2.073%), 4 pgs degraded)
2026-03-10T11:51:10.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:10 vm05 bash[68966]: cluster 2026-03-10T11:51:10.011431+0000 mon.a (mon.0) 449 : cluster [INF] Cluster is now healthy
2026-03-10T11:51:10.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:10 vm07 bash[46158]: cluster 2026-03-10T11:51:10.011410+0000 mon.a (mon.0) 448 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 13/627 objects degraded (2.073%), 4 pgs degraded)
2026-03-10T11:51:10.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:10 vm07 bash[46158]: cluster 2026-03-10T11:51:10.011431+0000 mon.a (mon.0) 449 : cluster [INF] Cluster is now healthy
2026-03-10T11:51:11.253 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:51:11.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:11 vm05 bash[65415]: audit 2026-03-10T11:51:09.167238+0000 mgr.y (mgr.44107) 282 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:11.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:11 vm05 bash[65415]: cluster 2026-03-10T11:51:09.464487+0000 mgr.y (mgr.44107) 283 : cluster [DBG] pgmap v148: 161 pgs: 4 activating, 12 peering, 145 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 779 B/s rd, 0 op/s
2026-03-10T11:51:11.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:11 vm05 bash[68966]: audit 2026-03-10T11:51:09.167238+0000 mgr.y (mgr.44107) 282 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:11.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:11 vm05 bash[68966]: cluster 2026-03-10T11:51:09.464487+0000 mgr.y (mgr.44107) 283 : cluster [DBG] pgmap v148: 161 pgs: 4 activating, 12 peering, 145 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 779 B/s rd, 0 op/s
2026-03-10T11:51:11.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:11 vm07 bash[46158]: audit 2026-03-10T11:51:09.167238+0000 mgr.y (mgr.44107) 282 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:11.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:11 vm07 bash[46158]: cluster 2026-03-10T11:51:09.464487+0000 mgr.y (mgr.44107) 283 : cluster [DBG] pgmap v148: 161 pgs: 4 activating, 12 peering, 145 active+clean; 457 KiB data, 249 MiB used, 160 GiB / 160 GiB avail; 779 B/s rd, 0 op/s
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (17m) 52s ago 24m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (5m) 6s ago 24m 67.2M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (5m) 52s ago 24m 44.2M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (5m) 6s ago 27m 468M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (14m) 52s ago 28m 532M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (3m) 52s ago 28m 49.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (4m) 6s ago 27m 49.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (3m) 52s ago 27m 45.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (17m) 52s ago 24m 8024k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (17m) 6s ago 24m 8047k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (94s) 52s ago 27m 46.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (57s) 52s ago 26m 22.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8c6d1f8db09
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (2m) 52s ago 26m 46.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:51:11.641 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (2m) 52s ago 26m 68.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:51:11.642 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (42s) 6s ago 26m 48.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f48f9737e97e
2026-03-10T11:51:11.642 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (26s) 6s ago 25m 45.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4b51ce79d374
2026-03-10T11:51:11.642 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (11s) 6s ago 25m 22.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8db64879085d
2026-03-10T11:51:11.642 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (25m) 6s ago 25m 60.3M 4096M 17.2.0 e1d6a67b021e c542edbe96b5
2026-03-10T11:51:11.642 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (5m) 6s ago 24m 45.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:51:11.642 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (24m) 52s ago 24m 89.4M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:51:11.642 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (24m) 6s ago 24m 90.2M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
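In the ceph orch ps listing above, osd.7 and the two rgw daemons still report 17.2.0 while the other Ceph containers are already on 19.2.3-678-ge911bdeb (the monitoring stack and iscsi report their own product versions), which is exactly what a staggered upgrade looks like mid-flight. To watch just the OSDs converge, one could narrow the listing, e.g. (illustrative invocation, not part of this run):

  ceph orch ps --daemon-type osd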
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    "mon": {
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    "mgr": {
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    "osd": {
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 1,
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 7
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    "rgw": {
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    },
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    "overall": {
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 3,
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:        "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 12
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:    }
2026-03-10T11:51:11.878 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:51:12.079 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:51:12.079 INFO:teuthology.orchestra.run.vm05.stdout:    "target_image": "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc",
2026-03-10T11:51:12.079 INFO:teuthology.orchestra.run.vm05.stdout:    "in_progress": true,
2026-03-10T11:51:12.079 INFO:teuthology.orchestra.run.vm05.stdout:    "which": "Upgrading daemons of type(s) crash,osd",
2026-03-10T11:51:12.079 INFO:teuthology.orchestra.run.vm05.stdout:    "services_complete": [],
2026-03-10T11:51:12.079 INFO:teuthology.orchestra.run.vm05.stdout:    "progress": "7/8 daemons upgraded",
2026-03-10T11:51:12.079 INFO:teuthology.orchestra.run.vm05.stdout:    "message": "Currently upgrading osd daemons",
2026-03-10T11:51:12.079 INFO:teuthology.orchestra.run.vm05.stdout:    "is_paused": false
2026-03-10T11:51:12.079 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:51:12.583 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.248490+0000 mgr.y (mgr.44107) 284 : audit [DBG] from='client.44401 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
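The ceph versions map and the ceph orch upgrade status JSON above agree: among the version-reporting daemons only osd.7 and the two rgws remain on 17.2.0 (quincy), and the orchestrator reports 7/8 daemons upgraded while working through daemon types crash,osd. The start command for this phase is not captured in this excerpt, but a staggered phase of this shape is typically launched with something like (a sketch using the target_image digest reported above):

  ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc --daemon-types crash,osd

with a later staggered start (e.g. --services rgw.foo) or an unrestricted upgrade start picking up the remaining daemons.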
2026-03-10T11:51:12.583 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.248490+0000 mgr.y (mgr.44107) 284 : audit [DBG] from='client.44401 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.441810+0000 mgr.y (mgr.44107) 285 : audit [DBG] from='client.54450 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: cluster 2026-03-10T11:51:11.464924+0000 mgr.y (mgr.44107) 286 : cluster [DBG] pgmap v149: 161 pgs: 161 active+clean; 457 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.509222+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.514579+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.517117+0000 mon.c (mon.1) 300 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.518454+0000 mon.c (mon.1) 301 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.525474+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.565053+0000 mon.c (mon.1) 302 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.567224+0000 mon.c (mon.1) 303 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.568469+0000 mon.c (mon.1) 304 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.569542+0000 mon.c (mon.1) 305 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.570751+0000 mon.c (mon.1) 306 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.571245+0000 mgr.y (mgr.44107) 287 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "osd ok-to-stop", "ids": ["7"], "max": 16}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: cephadm 2026-03-10T11:51:11.571867+0000 mgr.y (mgr.44107) 288 : cephadm [INF] Upgrade: osd.7 is safe to restart
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.642469+0000 mgr.y (mgr.44107) 289 : audit [DBG] from='client.44410 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.883083+0000 mon.a (mon.0) 453 : audit [DBG] from='client.? 192.168.123.105:0/3797714915' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: cephadm 2026-03-10T11:51:11.980731+0000 mgr.y (mgr.44107) 290 : cephadm [INF] Upgrade: Updating osd.7
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.985257+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.988131+0000 mon.c (mon.1) 307 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:11.989076+0000 mon.c (mon.1) 308 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: cephadm 2026-03-10T11:51:11.991243+0000 mgr.y (mgr.44107) 291 : cephadm [INF] Deploying daemon osd.7 on vm07
2026-03-10T11:51:12.584 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 bash[46158]: audit 2026-03-10T11:51:12.083726+0000 mgr.y (mgr.44107) 292 : audit [DBG] from='client.54465 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
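The stream above is one iteration of cephadm's per-daemon upgrade loop: the mgr dispatches `osd ok-to-stop` for osd.7, logs "osd.7 is safe to restart" when the mon agrees, then redeploys the daemon on the target image. A sketch of the same gate expressed as plain CLI calls (illustrative only; this is not the mgr module's actual code path):

    import subprocess

    def ok_to_stop(osd_id: int) -> bool:
        # `ceph osd ok-to-stop` exits non-zero when stopping the OSD
        # would leave PGs unavailable.
        result = subprocess.run(
            ["ceph", "osd", "ok-to-stop", str(osd_id)],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if ok_to_stop(7):
        print("osd.7 is safe to restart")  # matches the cephadm log line above
    else:
        print("restarting osd.7 now would make PGs unavailable")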
2026-03-10T11:51:12.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:12 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:12.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:12 vm07 systemd[1]: Stopping Ceph osd.7 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:51:12.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:12 vm07 bash[30341]: debug 2026-03-10T11:51:12.814+0000 7f15f1181700 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-10T11:51:12.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:12 vm07 bash[30341]: debug 2026-03-10T11:51:12.814+0000 7f15f1181700 -1 osd.7 137 *** Got signal Terminated *** 2026-03-10T11:51:12.947 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:12 vm07 bash[30341]: debug 2026-03-10T11:51:12.814+0000 7f15f1181700 -1 osd.7 137 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-10T11:51:12.947 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:51:12 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:12.947 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:51:12 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:12.947 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:51:12 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:12.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:12 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
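The OSD exits immediately on SIGTERM because osd_fast_shutdown is enabled, so the short OSD_DOWN/PG_DEGRADED window that follows each restart is expected rather than a failure. A sketch of confirming the setting on a live cluster (illustrative only):

    import subprocess

    out = subprocess.run(
        ["ceph", "config", "get", "osd", "osd_fast_shutdown"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(f"osd_fast_shutdown = {out}")  # expected here: "true"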
2026-03-10T11:51:13.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:13 vm05 bash[65415]: cluster 2026-03-10T11:51:12.824865+0000 mon.a (mon.0) 455 : cluster [INF] osd.7 marked itself down and dead
2026-03-10T11:51:13.852 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:13 vm07 bash[68708]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-7
2026-03-10T11:51:14.103 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:13 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.7.service: Deactivated successfully.
2026-03-10T11:51:14.103 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:13 vm07 systemd[1]: Stopped Ceph osd.7 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:51:14.446 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:14 vm07 systemd[1]: Started Ceph osd.7 for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:51:14.446 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:14 vm07 bash[68911]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:51:14.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:14 vm05 bash[65415]: cluster 2026-03-10T11:51:13.465366+0000 mgr.y (mgr.44107) 293 : cluster [DBG] pgmap v150: 161 pgs: 161 active+clean; 457 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:51:14.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:14 vm05 bash[65415]: cluster 2026-03-10T11:51:13.512538+0000 mon.a (mon.0) 456 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2026-03-10T11:51:14.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:14 vm05 bash[65415]: cluster 2026-03-10T11:51:13.512553+0000 mon.a (mon.0) 457 : cluster [WRN] Health check failed: all OSDs are running squid or later but require_osd_release < squid (OSD_UPGRADE_FINISHED)
2026-03-10T11:51:14.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:14 vm05 bash[65415]: cluster 2026-03-10T11:51:13.527249+0000 mon.a (mon.0) 458 : cluster [DBG] osdmap e138: 8 total, 7 up, 8 in
2026-03-10T11:51:14.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:14 vm05 bash[65415]: audit 2026-03-10T11:51:14.138413+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:14.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:14 vm05 bash[65415]: audit 2026-03-10T11:51:14.146412+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:14.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:14 vm05 bash[65415]: audit 2026-03-10T11:51:14.149626+0000 mon.c (mon.1) 309 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
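Both warnings above are transient in this run: OSD_DOWN clears once osd.7 rejoins, and OSD_UPGRADE_FINISHED clears when require_osd_release is raised (cephadm does this at the end of the upgrade; done by hand it would be `ceph osd require-osd-release squid`). A sketch of listing the active health checks while the test is running (illustrative only):

    import json
    import subprocess

    health = json.loads(subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    for name, check in health.get("checks", {}).items():
        # e.g. "OSD_DOWN - 1 osds down"
        print(name, "-", check["summary"]["message"])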
2026-03-10T11:51:15.446 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[68911]: --> Failed to activate via raw: did not find any matching OSD to activate
2026-03-10T11:51:15.446 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[68911]: Running command: /usr/bin/ceph-authtool --gen-print-key
2026-03-10T11:51:15.446 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[68911]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
2026-03-10T11:51:15.446 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[68911]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-20554b09-8743-4d16-9a2e-acdd584dbc35/osd-block-d3a17b00-d9f4-4951-b587-40f724c9827b --path /var/lib/ceph/osd/ceph-7 --no-mon-config
2026-03-10T11:51:15.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:15 vm05 bash[65415]: cluster 2026-03-10T11:51:14.552945+0000 mon.a (mon.0) 461 : cluster [DBG] osdmap e139: 8 total, 7 up, 8 in
2026-03-10T11:51:15.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[68911]: Running command: /usr/bin/ln -snf /dev/ceph-20554b09-8743-4d16-9a2e-acdd584dbc35/osd-block-d3a17b00-d9f4-4951-b587-40f724c9827b /var/lib/ceph/osd/ceph-7/block
2026-03-10T11:51:15.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[68911]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-7/block
2026-03-10T11:51:15.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[68911]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
2026-03-10T11:51:15.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[68911]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-7
2026-03-10T11:51:15.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[68911]: --> ceph-volume lvm activate successful for osd ID: 7
2026-03-10T11:51:15.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:15 vm07 bash[69258]: debug 2026-03-10T11:51:15.710+0000 7f5cc4b53640 1 -- 192.168.123.107:0/146256089 <== mon.0 v2:192.168.123.105:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55b7ad521680 con 0x55b7ac72fc00
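Once `ceph-volume lvm activate` succeeds, the rebuilt osd.7 rejoins and the degraded PGs recover on their own; cephadm waits for this internally before touching the next daemon. A sketch of the same wait done externally (illustrative only):

    import json
    import subprocess
    import time

    def all_pgs_active_clean() -> bool:
        # `ceph status --format json` includes pgmap.pgs_by_state:
        # [{"state_name": "active+clean", "count": N}, ...]
        s = json.loads(subprocess.run(
            ["ceph", "status", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout)
        return all(st["state_name"] == "active+clean"
                   for st in s["pgmap"]["pgs_by_state"])

    while not all_pgs_active_clean():
        time.sleep(5)
    print("all PGs active+clean; safe to continue the upgrade")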
2026-03-10T11:51:16.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:16 vm05 bash[65415]: cluster 2026-03-10T11:51:15.465753+0000 mgr.y (mgr.44107) 294 : cluster [DBG] pgmap v153: 161 pgs: 7 active+undersized, 18 stale+active+clean, 5 active+undersized+degraded, 131 active+clean; 457 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 10/627 objects degraded (1.595%)
2026-03-10T11:51:16.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:16 vm05 bash[65415]: cluster 2026-03-10T11:51:15.526342+0000 mon.a (mon.0) 462 : cluster [WRN] Health check failed: Degraded data redundancy: 10/627 objects degraded (1.595%), 5 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:16.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:16 vm07 bash[69258]: debug 2026-03-10T11:51:16.658+0000 7f5cc73bd740 -1 Falling back to public interface
2026-03-10T11:51:15.526342+0000 mon.a (mon.0) 462 : cluster [WRN] Health check failed: Degraded data redundancy: 10/627 objects degraded (1.595%), 5 pgs degraded (PG_DEGRADED) 2026-03-10T11:51:17.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:17 vm07 bash[69258]: debug 2026-03-10T11:51:17.646+0000 7f5cc73bd740 -1 osd.7 0 read_superblock omap replica is missing. 2026-03-10T11:51:17.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:17 vm07 bash[69258]: debug 2026-03-10T11:51:17.670+0000 7f5cc73bd740 -1 osd.7 137 log_to_monitors true 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:18 vm05 bash[65415]: cluster 2026-03-10T11:51:17.466228+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v154: 161 pgs: 31 active+undersized, 3 stale+active+clean, 23 active+undersized+degraded, 104 active+clean; 457 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:18 vm05 bash[65415]: cluster 2026-03-10T11:51:17.466228+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v154: 161 pgs: 31 active+undersized, 3 stale+active+clean, 23 active+undersized+degraded, 104 active+clean; 457 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:18 vm05 bash[65415]: audit 2026-03-10T11:51:17.679350+0000 mon.b (mon.2) 34 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:18 vm05 bash[65415]: audit 2026-03-10T11:51:17.679350+0000 mon.b (mon.2) 34 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:18 vm05 bash[65415]: audit 2026-03-10T11:51:17.683702+0000 mon.a (mon.0) 463 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:18 vm05 bash[65415]: audit 2026-03-10T11:51:17.683702+0000 mon.a (mon.0) 463 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:18 vm05 bash[68966]: cluster 2026-03-10T11:51:17.466228+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v154: 161 pgs: 31 active+undersized, 3 stale+active+clean, 23 active+undersized+degraded, 104 active+clean; 457 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:18 vm05 bash[68966]: cluster 2026-03-10T11:51:17.466228+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v154: 161 pgs: 31 active+undersized, 3 stale+active+clean, 23 active+undersized+degraded, 104 active+clean; 457 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:18 vm05 bash[68966]: audit 2026-03-10T11:51:17.679350+0000 mon.b (mon.2) 34 : audit [INF] from='osd.7 
[v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:18 vm05 bash[68966]: audit 2026-03-10T11:51:17.679350+0000 mon.b (mon.2) 34 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:18 vm05 bash[68966]: audit 2026-03-10T11:51:17.683702+0000 mon.a (mon.0) 463 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:18 vm05 bash[68966]: audit 2026-03-10T11:51:17.683702+0000 mon.a (mon.0) 463 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:18 vm07 bash[46158]: cluster 2026-03-10T11:51:17.466228+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v154: 161 pgs: 31 active+undersized, 3 stale+active+clean, 23 active+undersized+degraded, 104 active+clean; 457 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-10T11:51:18.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:18 vm07 bash[46158]: cluster 2026-03-10T11:51:17.466228+0000 mgr.y (mgr.44107) 295 : cluster [DBG] pgmap v154: 161 pgs: 31 active+undersized, 3 stale+active+clean, 23 active+undersized+degraded, 104 active+clean; 457 KiB data, 253 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%) 2026-03-10T11:51:18.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:18 vm07 bash[46158]: audit 2026-03-10T11:51:17.679350+0000 mon.b (mon.2) 34 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:18 vm07 bash[46158]: audit 2026-03-10T11:51:17.679350+0000 mon.b (mon.2) 34 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:18 vm07 bash[46158]: audit 2026-03-10T11:51:17.683702+0000 mon.a (mon.0) 463 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:18.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:18 vm07 bash[46158]: audit 2026-03-10T11:51:17.683702+0000 mon.a (mon.0) 463 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch 2026-03-10T11:51:19.171 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:51:18 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:51:18] "GET /metrics HTTP/1.1" 200 37663 "" "Prometheus/2.51.0" 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:19 vm05 bash[68966]: audit 2026-03-10T11:51:18.561860+0000 mon.a (mon.0) 464 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush 
set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:19 vm05 bash[68966]: audit 2026-03-10T11:51:18.561860+0000 mon.a (mon.0) 464 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:19 vm05 bash[68966]: audit 2026-03-10T11:51:18.565695+0000 mon.b (mon.2) 35 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:19 vm05 bash[68966]: audit 2026-03-10T11:51:18.565695+0000 mon.b (mon.2) 35 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:19 vm05 bash[68966]: cluster 2026-03-10T11:51:18.566790+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e140: 8 total, 7 up, 8 in 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:19 vm05 bash[68966]: cluster 2026-03-10T11:51:18.566790+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e140: 8 total, 7 up, 8 in 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:19 vm05 bash[68966]: audit 2026-03-10T11:51:18.570011+0000 mon.a (mon.0) 466 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:19 vm05 bash[68966]: audit 2026-03-10T11:51:18.570011+0000 mon.a (mon.0) 466 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:19 vm05 bash[65415]: audit 2026-03-10T11:51:18.561860+0000 mon.a (mon.0) 464 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:19 vm05 bash[65415]: audit 2026-03-10T11:51:18.561860+0000 mon.a (mon.0) 464 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:19 vm05 bash[65415]: audit 2026-03-10T11:51:18.565695+0000 mon.b (mon.2) 35 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:19 vm05 bash[65415]: audit 2026-03-10T11:51:18.565695+0000 mon.b (mon.2) 35 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch 2026-03-10T11:51:19.840 
2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:19 vm05 bash[65415]: cluster 2026-03-10T11:51:18.566790+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e140: 8 total, 7 up, 8 in
2026-03-10T11:51:19.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:19 vm05 bash[65415]: audit 2026-03-10T11:51:18.570011+0000 mon.a (mon.0) 466 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T11:51:19.947 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:19 vm07 bash[69258]: debug 2026-03-10T11:51:19.578+0000 7f5cbe967640 -1 osd.7 137 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-10T11:51:19.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:19 vm07 bash[46158]: audit 2026-03-10T11:51:18.561860+0000 mon.a (mon.0) 464 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-10T11:51:19.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:19 vm07 bash[46158]: audit 2026-03-10T11:51:18.565695+0000 mon.b (mon.2) 35 : audit [INF] from='osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966]' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T11:51:19.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:19 vm07 bash[46158]: cluster 2026-03-10T11:51:18.566790+0000 mon.a (mon.0) 465 : cluster [DBG] osdmap e140: 8 total, 7 up, 8 in
2026-03-10T11:51:19.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:19 vm07 bash[46158]: audit 2026-03-10T11:51:18.570011+0000 mon.a (mon.0) 466 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm07", "root=default"]}]: dispatch
2026-03-10T11:51:20.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:20 vm07 bash[46158]: audit 2026-03-10T11:51:19.176011+0000 mgr.y (mgr.44107) 296 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:20.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:20 vm07 bash[46158]: cluster 2026-03-10T11:51:19.466654+0000 mgr.y (mgr.44107) 297 : cluster [DBG] pgmap v156: 161 pgs: 40 active+undersized, 23 active+undersized+degraded, 98 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%)
2026-03-10T11:51:20.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:20 vm07 bash[46158]: audit 2026-03-10T11:51:20.478994+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:20.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:20 vm07 bash[46158]: audit 2026-03-10T11:51:20.484429+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:20.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:20 vm07 bash[46158]: audit 2026-03-10T11:51:20.493228+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:20.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:20 vm07 bash[46158]: audit 2026-03-10T11:51:20.496807+0000 mon.c (mon.1) 310 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:21.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:20 vm05 bash[68966]: audit 2026-03-10T11:51:19.176011+0000 mgr.y (mgr.44107) 296 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:21.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:20 vm05 bash[68966]: cluster 2026-03-10T11:51:19.466654+0000 mgr.y (mgr.44107) 297 : cluster [DBG] pgmap v156: 161 pgs: 40 active+undersized, 23 active+undersized+degraded, 98 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%)
2026-03-10T11:51:21.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:20 vm05 bash[68966]: audit 2026-03-10T11:51:20.478994+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:21.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:20 vm05 bash[68966]: audit 2026-03-10T11:51:20.484429+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:21.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:20 vm05 bash[68966]: audit 2026-03-10T11:51:20.493228+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:21.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:20 vm05 bash[68966]: audit 2026-03-10T11:51:20.496807+0000 mon.c (mon.1) 310 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:21.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:20 vm05 bash[65415]: audit 2026-03-10T11:51:19.176011+0000 mgr.y (mgr.44107) 296 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:21.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:20 vm05 bash[65415]: cluster 2026-03-10T11:51:19.466654+0000 mgr.y (mgr.44107) 297 : cluster [DBG] pgmap v156: 161 pgs: 40 active+undersized, 23 active+undersized+degraded, 98 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 78/627 objects degraded (12.440%)
2026-03-10T11:51:21.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:20 vm05 bash[65415]: audit 2026-03-10T11:51:20.478994+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:21.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:20 vm05 bash[65415]: audit 2026-03-10T11:51:20.484429+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:21.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:20 vm05 bash[65415]: audit 2026-03-10T11:51:20.493228+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:21.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:20 vm05 bash[65415]: audit 2026-03-10T11:51:20.496807+0000 mon.c (mon.1) 310 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:21 vm07 bash[46158]: cluster 2026-03-10T11:51:19.567153+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 29455.415900 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T11:51:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:21 vm07 bash[46158]: cluster 2026-03-10T11:51:20.576082+0000 mon.a (mon.0) 470 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:51:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:21 vm07 bash[46158]: cluster 2026-03-10T11:51:20.599144+0000 mon.a (mon.0) 471 : cluster [INF] osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966] boot
2026-03-10T11:51:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:21 vm07 bash[46158]: cluster 2026-03-10T11:51:20.600180+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in
2026-03-10T11:51:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:21 vm07 bash[46158]: audit 2026-03-10T11:51:20.605556+0000 mon.c (mon.1) 311 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:51:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:21 vm07 bash[46158]: cluster 2026-03-10T11:51:20.675459+0000 mon.a (mon.0) 473 : cluster [WRN] Health check update: Degraded data redundancy: 78/627 objects degraded (12.440%), 23 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:21 vm07 bash[46158]: audit 2026-03-10T11:51:21.036445+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:21 vm07 bash[46158]: audit 2026-03-10T11:51:21.044016+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:21.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:21 vm07 bash[46158]: cluster 2026-03-10T11:51:21.586738+0000 mon.a (mon.0) 476 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in
2026-03-10T11:51:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:21 vm05 bash[68966]: cluster 2026-03-10T11:51:19.567153+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 29455.415900 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
2026-03-10T11:51:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:21 vm05 bash[68966]: cluster 2026-03-10T11:51:20.576082+0000 mon.a (mon.0) 470 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:51:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:21 vm05 bash[68966]: cluster 2026-03-10T11:51:20.599144+0000 mon.a (mon.0) 471 : cluster [INF] osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966] boot
2026-03-10T11:51:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:21 vm05 bash[68966]: cluster 2026-03-10T11:51:20.600180+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in
2026-03-10T11:51:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:21 vm05 bash[68966]: audit 2026-03-10T11:51:20.605556+0000 mon.c (mon.1) 311 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:51:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:21 vm05 bash[68966]: cluster 2026-03-10T11:51:20.675459+0000 mon.a (mon.0) 473 : cluster [WRN] Health check update: Degraded data redundancy: 78/627 objects degraded (12.440%), 23 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:21 vm05 bash[68966]: audit 2026-03-10T11:51:21.036445+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:21 vm05 bash[68966]: audit 2026-03-10T11:51:21.044016+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:21 vm05 bash[68966]: cluster 2026-03-10T11:51:21.586738+0000 mon.a (mon.0) 476 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in
2026-03-10T11:51:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:21 vm05 bash[65415]: cluster 2026-03-10T11:51:19.567153+0000 osd.7 (osd.7) 1 : cluster [WRN] OSD bench result of 29455.415900 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.7. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
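The OSD bench warning above is mclock rejecting an implausible benchmark result (29455 IOPS measured against a 50 to 500 IOPS limit range for an hdd-class device) and keeping the default 315 IOPS. As the message itself suggests, the usual remedy is to benchmark the device externally (e.g. with fio) and pin the value; a hedged sketch, with 315 standing in for a measured figure:
  ceph config set osd.7 osd_mclock_max_capacity_iops_hdd 315   # per-OSD override
  ceph config set osd osd_mclock_max_capacity_iops_hdd 315     # or for all hdd OSDs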
2026-03-10T11:51:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:21 vm05 bash[65415]: cluster 2026-03-10T11:51:20.576082+0000 mon.a (mon.0) 470 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2026-03-10T11:51:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:21 vm05 bash[65415]: cluster 2026-03-10T11:51:20.599144+0000 mon.a (mon.0) 471 : cluster [INF] osd.7 [v2:192.168.123.107:6824/3068670966,v1:192.168.123.107:6825/3068670966] boot
2026-03-10T11:51:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:21 vm05 bash[65415]: cluster 2026-03-10T11:51:20.600180+0000 mon.a (mon.0) 472 : cluster [DBG] osdmap e141: 8 total, 8 up, 8 in
2026-03-10T11:51:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:21 vm05 bash[65415]: audit 2026-03-10T11:51:20.605556+0000 mon.c (mon.1) 311 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-10T11:51:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:21 vm05 bash[65415]: cluster 2026-03-10T11:51:20.675459+0000 mon.a (mon.0) 473 : cluster [WRN] Health check update: Degraded data redundancy: 78/627 objects degraded (12.440%), 23 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:21 vm05 bash[65415]: audit 2026-03-10T11:51:21.036445+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:21 vm05 bash[65415]: audit 2026-03-10T11:51:21.044016+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:22.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:21 vm05 bash[65415]: cluster 2026-03-10T11:51:21.586738+0000 mon.a (mon.0) 476 : cluster [DBG] osdmap e142: 8 total, 8 up, 8 in
2026-03-10T11:51:23.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:23 vm07 bash[46158]: cluster 2026-03-10T11:51:21.467173+0000 mgr.y (mgr.44107) 298 : cluster [DBG] pgmap v158: 161 pgs: 15 peering, 31 active+undersized, 17 active+undersized+degraded, 98 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 61/627 objects degraded (9.729%)
2026-03-10T11:51:23.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:23 vm05 bash[68966]: cluster 2026-03-10T11:51:21.467173+0000 mgr.y (mgr.44107) 298 : cluster [DBG] pgmap v158: 161 pgs: 15 peering, 31 active+undersized, 17 active+undersized+degraded, 98 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 61/627 objects degraded (9.729%)
2026-03-10T11:51:23.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:23 vm05 bash[65415]: cluster 2026-03-10T11:51:21.467173+0000 mgr.y (mgr.44107) 298 : cluster [DBG] pgmap v158: 161 pgs: 15 peering, 31 active+undersized, 17 active+undersized+degraded, 98 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 61/627 objects degraded (9.729%)
2026-03-10T11:51:25.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:25 vm07 bash[46158]: cluster 2026-03-10T11:51:23.467667+0000 mgr.y (mgr.44107) 299 : cluster [DBG] pgmap v160: 161 pgs: 15 peering, 19 active+undersized, 11 active+undersized+degraded, 116 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 44/627 objects degraded (7.018%)
2026-03-10T11:51:25.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:25 vm05 bash[68966]: cluster 2026-03-10T11:51:23.467667+0000 mgr.y (mgr.44107) 299 : cluster [DBG] pgmap v160: 161 pgs: 15 peering, 19 active+undersized, 11 active+undersized+degraded, 116 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 44/627 objects degraded (7.018%)
2026-03-10T11:51:25.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:25 vm05 bash[65415]: cluster 2026-03-10T11:51:23.467667+0000 mgr.y (mgr.44107) 299 : cluster [DBG] pgmap v160: 161 pgs: 15 peering, 19 active+undersized, 11 active+undersized+degraded, 116 active+clean; 457 KiB data, 271 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s; 44/627 objects degraded (7.018%)
2026-03-10T11:51:26.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:26 vm07 bash[46158]: cluster 2026-03-10T11:51:25.675999+0000 mon.a (mon.0) 477 : cluster [WRN] Health check update: Degraded data redundancy: 44/627 objects degraded (7.018%), 11 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:26.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:26 vm05 bash[68966]: cluster 2026-03-10T11:51:25.675999+0000 mon.a (mon.0) 477 : cluster [WRN] Health check update: Degraded data redundancy: 44/627 objects degraded (7.018%), 11 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:26.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:26 vm05 bash[65415]: cluster 2026-03-10T11:51:25.675999+0000 mon.a (mon.0) 477 : cluster [WRN] Health check update: Degraded data redundancy: 44/627 objects degraded (7.018%), 11 pgs degraded (PG_DEGRADED)
2026-03-10T11:51:27.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: cluster 2026-03-10T11:51:25.468078+0000 mgr.y (mgr.44107) 300 : cluster [DBG] pgmap v161: 161 pgs: 15 peering, 146 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s
2026-03-10T11:51:27.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: cluster 2026-03-10T11:51:26.106181+0000 mon.a (mon.0) 478 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 11 pgs degraded)
2026-03-10T11:51:27.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.612244+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:27.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.617775+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:27.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.619776+0000 mon.c (mon.1) 312 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.620763+0000 mon.c (mon.1) 313 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.624987+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.666561+0000 mon.c (mon.1) 314 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
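The mgr.y audit burst above ("config generate-minimal-conf", "auth get client.admin", "config dump", repeated "versions") is consistent with the cephadm mgr module refreshing cluster state while the staggered upgrade proceeds. The same view is available directly from the CLI; a hedged sketch:
  ceph versions              # per-daemon version breakdown the mgr is polling for
  ceph orch upgrade status   # progress and state of the running upgrade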
cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.668200+0000 mon.c (mon.1) 315 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.669317+0000 mon.c (mon.1) 316 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.669317+0000 mon.c (mon.1) 316 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.670390+0000 mon.c (mon.1) 317 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.670390+0000 mon.c (mon.1) 317 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.671637+0000 mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.671637+0000 mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.676817+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.676817+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.679714+0000 mon.c (mon.1) 319 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.679714+0000 mon.c (mon.1) 319 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.679918+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.679918+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 
10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.682587+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.682587+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.685273+0000 mon.c (mon.1) 320 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.685273+0000 mon.c (mon.1) 320 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.685461+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.685461+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.687981+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.687981+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.753537+0000 mon.c (mon.1) 321 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.753537+0000 mon.c (mon.1) 321 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.753771+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.753771+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 
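The "config rm container_image" entries here and below are the staggered upgrade dropping the per-daemon image pin once each OSD has been redeployed on the new image. A hedged sketch for auditing that state by hand:
  ceph config dump | grep container_image   # list any per-daemon image pins still in place
  ceph config rm osd.0 container_image      # drop a stray pin, as the mgr does in the audit lines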
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.762847+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.769363+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.769586+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.773173+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.822600+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.822813+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.826846+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.832191+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.832412+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.835631+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.838526+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.838731+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch
2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.841638+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107
' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.841638+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.843267+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.843267+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.843454+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.843454+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.846184+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-10T11:51:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.846184+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-10T11:51:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.848825+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.848825+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.849022+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:27 vm07 bash[46158]: audit 2026-03-10T11:51:26.849022+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: cluster 2026-03-10T11:51:25.468078+0000 mgr.y (mgr.44107) 300 : cluster [DBG] pgmap v161: 161 pgs: 15 peering, 146 active+clean; 457 KiB data, 275 MiB used, 160 GiB 
/ 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T11:51:27.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: cluster 2026-03-10T11:51:25.468078+0000 mgr.y (mgr.44107) 300 : cluster [DBG] pgmap v161: 161 pgs: 15 peering, 146 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: cluster 2026-03-10T11:51:26.106181+0000 mon.a (mon.0) 478 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 11 pgs degraded) 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: cluster 2026-03-10T11:51:26.106181+0000 mon.a (mon.0) 478 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 11 pgs degraded) 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.612244+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.612244+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.617775+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.617775+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.619776+0000 mon.c (mon.1) 312 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.619776+0000 mon.c (mon.1) 312 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.620763+0000 mon.c (mon.1) 313 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.620763+0000 mon.c (mon.1) 313 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.624987+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.624987+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.666561+0000 mon.c (mon.1) 314 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.666561+0000 mon.c (mon.1) 314 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.668200+0000 mon.c (mon.1) 315 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.668200+0000 mon.c (mon.1) 315 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.669317+0000 mon.c (mon.1) 316 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.669317+0000 mon.c (mon.1) 316 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.670390+0000 mon.c (mon.1) 317 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.670390+0000 mon.c (mon.1) 317 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.671637+0000 mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.671637+0000 mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.676817+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.676817+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.679714+0000 mon.c (mon.1) 319 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.679714+0000 mon.c (mon.1) 319 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 
2026-03-10T11:51:26.679918+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.679918+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.682587+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.682587+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.685273+0000 mon.c (mon.1) 320 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.685273+0000 mon.c (mon.1) 320 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.685461+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.685461+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.687981+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.687981+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.753537+0000 mon.c (mon.1) 321 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.753537+0000 mon.c (mon.1) 321 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.753771+0000 mon.a 
(mon.0) 487 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.753771+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.762847+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.762847+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.769363+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.769363+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.769586+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.769586+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.773173+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.773173+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.822600+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.822600+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.822813+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44107 
' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.822813+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.826846+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-10T11:51:27.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.826846+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.832191+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: cluster 2026-03-10T11:51:25.468078+0000 mgr.y (mgr.44107) 300 : cluster [DBG] pgmap v161: 161 pgs: 15 peering, 146 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: cluster 2026-03-10T11:51:25.468078+0000 mgr.y (mgr.44107) 300 : cluster [DBG] pgmap v161: 161 pgs: 15 peering, 146 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.4 KiB/s rd, 1 op/s 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: cluster 2026-03-10T11:51:26.106181+0000 mon.a (mon.0) 478 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 11 pgs degraded) 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: cluster 2026-03-10T11:51:26.106181+0000 mon.a (mon.0) 478 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 44/627 objects degraded (7.018%), 11 pgs degraded) 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.612244+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.612244+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.617775+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.617775+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.619776+0000 mon.c (mon.1) 312 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:51:27.592 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.619776+0000 mon.c (mon.1) 312 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.620763+0000 mon.c (mon.1) 313 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.620763+0000 mon.c (mon.1) 313 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.624987+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.624987+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.666561+0000 mon.c (mon.1) 314 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.666561+0000 mon.c (mon.1) 314 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.668200+0000 mon.c (mon.1) 315 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.668200+0000 mon.c (mon.1) 315 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.669317+0000 mon.c (mon.1) 316 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.669317+0000 mon.c (mon.1) 316 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.670390+0000 mon.c (mon.1) 317 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.670390+0000 mon.c (mon.1) 317 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.671637+0000 
mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.671637+0000 mon.c (mon.1) 318 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.676817+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.676817+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.679714+0000 mon.c (mon.1) 319 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.679714+0000 mon.c (mon.1) 319 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.679918+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.679918+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.682587+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.682587+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.0"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.685273+0000 mon.c (mon.1) 320 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.685273+0000 mon.c (mon.1) 320 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.685461+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: 
audit 2026-03-10T11:51:26.685461+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.687981+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.687981+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.1"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.753537+0000 mon.c (mon.1) 321 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.753537+0000 mon.c (mon.1) 321 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.753771+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.753771+0000 mon.a (mon.0) 487 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.762847+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.762847+0000 mon.a (mon.0) 488 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.2"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.769363+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.769363+0000 mon.c (mon.1) 322 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.769586+0000 mon.a (mon.0) 489 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.769586+0000 
mon.a (mon.0) 489 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.773173+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.773173+0000 mon.a (mon.0) 490 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.3"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.822600+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.822600+0000 mon.c (mon.1) 323 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.822813+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.822813+0000 mon.a (mon.0) 491 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.826846+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.826846+0000 mon.a (mon.0) 492 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.4"}]': finished 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.832191+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T11:51:27.592 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.832191+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.832412+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.832412+0000 mon.a (mon.0) 493 : audit [INF] 
from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.835631+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.835631+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.838526+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.838526+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.838731+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.838731+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.841638+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.841638+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.843267+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.843267+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.843454+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.843454+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' 
cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.846184+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.846184+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.848825+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.848825+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.849022+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:27 vm05 bash[65415]: audit 2026-03-10T11:51:26.849022+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.832191+0000 mon.c (mon.1) 324 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.832412+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.832412+0000 mon.a (mon.0) 493 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.835631+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.835631+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.5"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.838526+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: 
dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.838526+0000 mon.c (mon.1) 325 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.838731+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.838731+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.841638+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.841638+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.6"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.843267+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.843267+0000 mon.c (mon.1) 326 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.843454+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.843454+0000 mon.a (mon.0) 497 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.846184+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.846184+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd.7"}]': finished 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.848825+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.593 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.848825+0000 mon.c (mon.1) 327 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.849022+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:27.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:27 vm05 bash[68966]: audit 2026-03-10T11:51:26.849022+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "osd require-osd-release", "release": "squid"}]: dispatch 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cephadm 2026-03-10T11:51:26.672393+0000 mgr.y (mgr.44107) 301 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cephadm 2026-03-10T11:51:26.672393+0000 mgr.y (mgr.44107) 301 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cephadm 2026-03-10T11:51:26.848673+0000 mgr.y (mgr.44107) 302 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cephadm 2026-03-10T11:51:26.848673+0000 mgr.y (mgr.44107) 302 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cluster 2026-03-10T11:51:27.845102+0000 mon.a (mon.0) 500 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cluster 2026-03-10T11:51:27.845102+0000 mon.a (mon.0) 500 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cluster 2026-03-10T11:51:27.845121+0000 mon.a (mon.0) 501 : cluster [INF] Cluster is now healthy 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cluster 2026-03-10T11:51:27.845121+0000 mon.a (mon.0) 501 : cluster [INF] Cluster is now healthy 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.847425+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.847425+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cluster 2026-03-10T11:51:27.850023+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: cluster 
2026-03-10T11:51:27.850023+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in
2026-03-10T11:51:28.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.854948+0000 mon.c (mon.1) 328 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.858851+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.862782+0000 mon.c (mon.1) 329 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.866461+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.869825+0000 mon.c (mon.1) 330 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.870931+0000 mon.c (mon.1) 331 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.876732+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.879534+0000 mon.c (mon.1) 332 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.882839+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.885726+0000 mon.c (mon.1) 333 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.889374+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.892492+0000 mon.c (mon.1) 334 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.893516+0000 mon.c (mon.1) 335 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.894518+0000 mon.c (mon.1) 336 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.895483+0000 mon.c (mon.1) 337 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.896471+0000 mon.c (mon.1) 338 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.897413+0000 mon.c (mon.1) 339 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.899102+0000 mon.c (mon.1) 340 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.899287+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.902072+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.904576+0000 mon.c (mon.1) 341 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.904774+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.907122+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.909651+0000 mon.c (mon.1) 342 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.909845+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.912106+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.914566+0000 mon.c (mon.1) 343 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.914756+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.917840+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-10T11:51:28.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.920434+0000 mon.c (mon.1) 344 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.920637+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.922843+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.925320+0000 mon.c (mon.1) 345 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.925512+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.926362+0000 mon.c (mon.1) 346 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.926561+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.928937+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.931345+0000 mon.c (mon.1) 347 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.931533+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.932387+0000 mon.c (mon.1) 348 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.932577+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.934796+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.937372+0000 mon.c (mon.1) 349 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.937573+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.938408+0000 mon.c (mon.1) 350 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.938600+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.940904+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.943361+0000 mon.c (mon.1) 351 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.943553+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.945848+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.948443+0000 mon.c (mon.1) 352 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.948639+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.949495+0000 mon.c (mon.1) 353 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.949686+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.950516+0000 mon.c (mon.1) 354 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.950713+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.951526+0000 mon.c (mon.1) 355 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.951730+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.952549+0000 mon.c (mon.1) 356 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.952735+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.953556+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.953749+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.954758+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:51:28.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.954951+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:51:28.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.957367+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:51:28.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:28 vm07 bash[46158]: audit 2026-03-10T11:51:27.958903+0000 mon.c (mon.1) 359 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: cephadm 2026-03-10T11:51:26.672393+0000 mgr.y (mgr.44107) 301 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: cephadm 2026-03-10T11:51:26.848673+0000 mgr.y (mgr.44107) 302 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: cluster 2026-03-10T11:51:27.845102+0000 mon.a (mon.0) 500 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid)
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: cluster 2026-03-10T11:51:27.845121+0000 mon.a (mon.0) 501 : cluster [INF] Cluster is now healthy
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.847425+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: cluster 2026-03-10T11:51:27.850023+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.854948+0000 mon.c (mon.1) 328 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.858851+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.862782+0000 mon.c (mon.1) 329 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.866461+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.869825+0000 mon.c (mon.1) 330 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.870931+0000 mon.c (mon.1) 331 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.876732+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.879534+0000 mon.c (mon.1) 332 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.882839+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.885726+0000 mon.c (mon.1) 333 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.889374+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.892492+0000 mon.c (mon.1) 334 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.893516+0000 mon.c (mon.1) 335 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.894518+0000 mon.c (mon.1) 336 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.895483+0000 mon.c (mon.1) 337 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.896471+0000 mon.c (mon.1) 338 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.897413+0000 mon.c (mon.1) 339 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.899102+0000 mon.c (mon.1) 340 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.899287+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.902072+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.904576+0000 mon.c (mon.1) 341 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.904774+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.907122+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.909651+0000 mon.c (mon.1) 342 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.909845+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.912106+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.914566+0000 mon.c (mon.1) 343 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.914756+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.917840+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.920434+0000 mon.c (mon.1) 344 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.920637+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.922843+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.925320+0000 mon.c (mon.1) 345 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.925512+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.926362+0000 mon.c (mon.1) 346 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.926561+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.928937+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.931345+0000 mon.c (mon.1) 347 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.931533+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.932387+0000 mon.c (mon.1) 348 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.932577+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.934796+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:51:28.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.937372+0000 mon.c (mon.1) 349 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.937573+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.938408+0000 mon.c (mon.1) 350 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.938600+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.940904+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.943361+0000 mon.c (mon.1) 351 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.943553+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.945848+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.948443+0000 mon.c (mon.1) 352 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.948639+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.949495+0000 mon.c (mon.1) 353 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.949686+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.950516+0000 mon.c (mon.1) 354 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.950713+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.951526+0000 mon.c (mon.1) 355 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.951730+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.952549+0000 mon.c (mon.1) 356 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.952735+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit
2026-03-10T11:51:27.953556+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.953556+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.953749+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.953749+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.954758+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.954758+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.954951+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.954951+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.957367+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.957367+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.958903+0000 mon.c (mon.1) 359 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:28 vm05 bash[68966]: audit 2026-03-10T11:51:27.958903+0000 mon.c (mon.1) 359 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cephadm 2026-03-10T11:51:26.672393+0000 mgr.y (mgr.44107) 301 : cephadm [INF] 
Upgrade: Setting container_image for all osd 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cephadm 2026-03-10T11:51:26.672393+0000 mgr.y (mgr.44107) 301 : cephadm [INF] Upgrade: Setting container_image for all osd 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cephadm 2026-03-10T11:51:26.848673+0000 mgr.y (mgr.44107) 302 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cephadm 2026-03-10T11:51:26.848673+0000 mgr.y (mgr.44107) 302 : cephadm [INF] Upgrade: Setting require_osd_release to 19 squid 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cluster 2026-03-10T11:51:27.845102+0000 mon.a (mon.0) 500 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cluster 2026-03-10T11:51:27.845102+0000 mon.a (mon.0) 500 : cluster [INF] Health check cleared: OSD_UPGRADE_FINISHED (was: all OSDs are running squid or later but require_osd_release < squid) 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cluster 2026-03-10T11:51:27.845121+0000 mon.a (mon.0) 501 : cluster [INF] Cluster is now healthy 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cluster 2026-03-10T11:51:27.845121+0000 mon.a (mon.0) 501 : cluster [INF] Cluster is now healthy 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.847425+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.847425+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "osd require-osd-release", "release": "squid"}]': finished 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cluster 2026-03-10T11:51:27.850023+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: cluster 2026-03-10T11:51:27.850023+0000 mon.a (mon.0) 503 : cluster [DBG] osdmap e143: 8 total, 8 up, 8 in 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.854948+0000 mon.c (mon.1) 328 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.854948+0000 mon.c (mon.1) 328 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.858851+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:28.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.858851+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.44107 
' entity='mgr.y' 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.862782+0000 mon.c (mon.1) 329 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.862782+0000 mon.c (mon.1) 329 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.866461+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.866461+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.869825+0000 mon.c (mon.1) 330 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.869825+0000 mon.c (mon.1) 330 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.870931+0000 mon.c (mon.1) 331 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.870931+0000 mon.c (mon.1) 331 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.876732+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.876732+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.879534+0000 mon.c (mon.1) 332 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.879534+0000 mon.c (mon.1) 332 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.882839+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.882839+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.885726+0000 mon.c (mon.1) 333 : audit [DBG] from='mgr.44107 
192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.885726+0000 mon.c (mon.1) 333 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.889374+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.889374+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.892492+0000 mon.c (mon.1) 334 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.892492+0000 mon.c (mon.1) 334 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.893516+0000 mon.c (mon.1) 335 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.893516+0000 mon.c (mon.1) 335 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.894518+0000 mon.c (mon.1) 336 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.894518+0000 mon.c (mon.1) 336 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.895483+0000 mon.c (mon.1) 337 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.895483+0000 mon.c (mon.1) 337 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.896471+0000 mon.c (mon.1) 338 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.896471+0000 mon.c (mon.1) 338 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.897413+0000 mon.c 
(mon.1) 339 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.897413+0000 mon.c (mon.1) 339 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.899102+0000 mon.c (mon.1) 340 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.899102+0000 mon.c (mon.1) 340 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.899287+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.899287+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.902072+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.902072+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.904576+0000 mon.c (mon.1) 341 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.904576+0000 mon.c (mon.1) 341 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.904774+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.904774+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.907122+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": 
"container_image", "who": "mon"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.907122+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.909651+0000 mon.c (mon.1) 342 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.909651+0000 mon.c (mon.1) 342 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.909845+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.909845+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.912106+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.912106+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.914566+0000 mon.c (mon.1) 343 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.914566+0000 mon.c (mon.1) 343 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.914756+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.914756+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.917840+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": 
"container_image", "who": "osd"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.917840+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.920434+0000 mon.c (mon.1) 344 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.920434+0000 mon.c (mon.1) 344 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.920637+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.920637+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.922843+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.922843+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.925320+0000 mon.c (mon.1) 345 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.925320+0000 mon.c (mon.1) 345 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.925512+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.925512+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.926362+0000 mon.c (mon.1) 346 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": 
"container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:51:28.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.926362+0000 mon.c (mon.1) 346 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.926561+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.926561+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.928937+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.928937+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.931345+0000 mon.c (mon.1) 347 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.931345+0000 mon.c (mon.1) 347 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.931533+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.931533+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.932387+0000 mon.c (mon.1) 348 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.932387+0000 mon.c (mon.1) 348 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.932577+0000 mon.a (mon.0) 523 : audit 
[INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.932577+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.934796+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.934796+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.937372+0000 mon.c (mon.1) 349 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.937372+0000 mon.c (mon.1) 349 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.937573+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.937573+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.938408+0000 mon.c (mon.1) 350 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.938408+0000 mon.c (mon.1) 350 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.938600+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.938600+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: 
audit 2026-03-10T11:51:27.940904+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.940904+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.943361+0000 mon.c (mon.1) 351 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.943361+0000 mon.c (mon.1) 351 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.943553+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.943553+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.945848+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.945848+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.948443+0000 mon.c (mon.1) 352 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.948443+0000 mon.c (mon.1) 352 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.948639+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.948639+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 
bash[65415]: audit 2026-03-10T11:51:27.949495+0000 mon.c (mon.1) 353 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.949495+0000 mon.c (mon.1) 353 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.949686+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.949686+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.950516+0000 mon.c (mon.1) 354 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.950516+0000 mon.c (mon.1) 354 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.950713+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.950713+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.951526+0000 mon.c (mon.1) 355 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.951526+0000 mon.c (mon.1) 355 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.951730+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.951730+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 
2026-03-10T11:51:27.952549+0000 mon.c (mon.1) 356 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.952549+0000 mon.c (mon.1) 356 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.952735+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.952735+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.953556+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.953556+0000 mon.c (mon.1) 357 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.953749+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.953749+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.954758+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.954758+0000 mon.c (mon.1) 358 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.954951+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.954951+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.957367+0000 
mon.a (mon.0) 537 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.957367+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.958903+0000 mon.c (mon.1) 359 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:28.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:28 vm05 bash[65415]: audit 2026-03-10T11:51:27.958903+0000 mon.c (mon.1) 359 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:29.123 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:51:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:51:28] "GET /metrics HTTP/1.1" 200 37740 "" "Prometheus/2.51.0" 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cluster 2026-03-10T11:51:27.468593+0000 mgr.y (mgr.44107) 303 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cluster 2026-03-10T11:51:27.468593+0000 mgr.y (mgr.44107) 303 : cluster [DBG] pgmap v162: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.855538+0000 mgr.y (mgr.44107) 304 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.855538+0000 mgr.y (mgr.44107) 304 : cephadm [INF] Upgrade: Setting container_image for all mds 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.863287+0000 mgr.y (mgr.44107) 305 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.863287+0000 mgr.y (mgr.44107) 305 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.873475+0000 mgr.y (mgr.44107) 306 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.873475+0000 mgr.y (mgr.44107) 306 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.880206+0000 mgr.y (mgr.44107) 307 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.880206+0000 mgr.y (mgr.44107) 307 : cephadm [INF] Upgrade: Setting 
container_image for all nfs
2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.886405+0000 mgr.y (mgr.44107) 308 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.898054+0000 mgr.y (mgr.44107) 309 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: cephadm 2026-03-10T11:51:27.954443+0000 mgr.y (mgr.44107) 310 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:51:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: audit 2026-03-10T11:51:28.266206+0000 mon.c (mon.1) 360 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:29.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: audit 2026-03-10T11:51:28.267287+0000 mon.c (mon.1) 361 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:51:29.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: audit 2026-03-10T11:51:28.282962+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:29.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: audit 2026-03-10T11:51:28.326204+0000 mon.c (mon.1) 362 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:51:29.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: audit 2026-03-10T11:51:28.327671+0000 mon.c (mon.1) 363 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:29.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: audit 2026-03-10T11:51:28.328490+0000 mon.c (mon.1) 364 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:51:29.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:29 vm07 bash[46158]: audit 2026-03-10T11:51:28.333369+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:31 vm07 bash[46158]: audit 2026-03-10T11:51:29.185233+0000 mgr.y (mgr.44107) 311 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:31 vm07 bash[46158]: cluster 2026-03-10T11:51:29.468920+0000 mgr.y (mgr.44107) 312 : cluster [DBG] pgmap v164: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s
2026-03-10T11:51:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:31 vm07 bash[46158]: audit 2026-03-10T11:51:30.894610+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:33 vm07 bash[46158]: cluster 2026-03-10T11:51:31.469325+0000 mgr.y (mgr.44107) 313 : cluster [DBG] pgmap v165: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s
2026-03-10T11:51:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:35 vm07 bash[46158]: cluster 2026-03-10T11:51:33.469732+0000 mgr.y (mgr.44107) 314 : cluster [DBG] pgmap v166: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T11:51:36.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:36 vm05 bash[65415]: cluster 2026-03-10T11:51:35.470127+0000 mgr.y (mgr.44107) 315 : cluster [DBG] pgmap v167: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T11:51:36.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:36 vm05 bash[65415]: audit 2026-03-10T11:51:35.496400+0000 mon.a (mon.0) 541 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:36.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:36 vm05 bash[65415]: audit 2026-03-10T11:51:35.498407+0000 mon.c (mon.1) 365 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:36.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:36 vm05 bash[65415]: audit 2026-03-10T11:51:35.902788+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:38.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:38 vm05 bash[65415]: cluster 2026-03-10T11:51:37.470584+0000 mgr.y (mgr.44107) 316 : cluster [DBG] pgmap v168: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s
2026-03-10T11:51:39.183 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:51:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:51:38] "GET /metrics HTTP/1.1" 200 37893 "" "Prometheus/2.51.0"
2026-03-10T11:51:40.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:40 vm05 bash[65415]: audit 2026-03-10T11:51:39.188524+0000 mgr.y (mgr.44107) 317 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:40.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:40 vm05 bash[65415]: cluster 2026-03-10T11:51:39.470906+0000 mgr.y (mgr.44107) 318 : cluster [DBG] pgmap v169: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 881 B/s rd, 0 op/s
2026-03-10T11:51:42.335 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:51:42.561 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:42 vm05 bash[65415]: cluster 2026-03-10T11:51:41.471258+0000 mgr.y (mgr.44107) 319 : cluster [DBG] pgmap v170: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:51:42.755 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:51:42.755 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (18m) 83s ago 25m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:51:42.755 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (5m) 22s ago 24m 67.2M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:51:42.755 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (6m) 83s ago 24m 44.2M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:51:42.755 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (6m) 22s ago 27m 468M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:51:42.755 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (15m) 83s ago 28m 532M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:51:42.755 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (4m) 83s ago 28m 49.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:51:42.755 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (5m) 22s ago 27m 49.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:51:42.755 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (4m) 83s ago 27m 45.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (18m) 83s ago 25m 8024k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (18m) 22s ago 25m 7896k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (2m) 83s ago 27m 46.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (88s) 83s ago 27m 22.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8c6d1f8db09
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (2m) 83s ago 27m 46.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (3m) 83s ago 26m 68.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (73s) 22s ago 26m 49.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f48f9737e97e
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (58s) 22s ago 26m 46.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4b51ce79d374
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (42s) 22s ago 26m 66.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8db64879085d
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (27s) 22s ago 25m 23.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e86e1860ea0d
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (6m) 22s ago 25m 45.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (24m) 83s ago 24m 89.4M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:51:42.756 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (24m) 22s ago 24m 90.7M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:51:42.802 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.osd | length == 1'"'"''
2026-03-10T11:51:43.250 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:51:43.288 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.osd | keys'"'"' | grep $sha1'
2026-03-10T11:51:43.514 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:43 vm05 bash[68966]: audit 2026-03-10T11:51:42.277001+0000 mgr.y (mgr.44107) 320 : audit [DBG] from='client.54477 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
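The 'ceph orch ps' listing above captures the staggered state mid-run: mon, mgr, and osd daemons already report 19.2.3-678-ge911bdeb while both rgw.foo daemons still run 17.2.0. The two jq one-liners that follow it are the harness's version assertions; reflowed as standalone commands (a sketch, assuming the target build's sha1 is exported the same way the harness passes it via -e):

    export sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
    ceph versions | jq -e '.osd | length == 1'         # all OSDs report exactly one version
    ceph versions | jq -e '.osd | keys' | grep $sha1   # and that version is the target build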
2026-03-10T11:51:43.514 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:43 vm05 bash[68966]: audit 2026-03-10T11:51:42.756837+0000 mgr.y (mgr.44107) 321 : audit [DBG] from='client.54483 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:43.514 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:43 vm05 bash[68966]: audit 2026-03-10T11:51:43.241006+0000 mon.b (mon.2) 36 : audit [DBG] from='client.? 192.168.123.105:0/2806496304' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:43.795 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)"
2026-03-10T11:51:43.840 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-10T11:51:44.254 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:51:44.254 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": null,
2026-03-10T11:51:44.254 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": false,
2026-03-10T11:51:44.254 INFO:teuthology.orchestra.run.vm05.stdout: "which": "",
2026-03-10T11:51:44.254 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:51:44.254 INFO:teuthology.orchestra.run.vm05.stdout: "progress": null,
2026-03-10T11:51:44.254 INFO:teuthology.orchestra.run.vm05.stdout: "message": "",
2026-03-10T11:51:44.254 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:51:44.254 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:51:44.302 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-10T11:51:44.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:44 vm05 bash[65415]: cluster 2026-03-10T11:51:43.473776+0000 mgr.y (mgr.44107) 322 : cluster [DBG] pgmap v171: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:51:44.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:44 vm05 bash[65415]: audit 2026-03-10T11:51:43.789906+0000 mon.c (mon.1) 366 : audit [DBG] from='client.? 192.168.123.105:0/3647115451' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:44.757 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
2026-03-10T11:51:44.829 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --services rgw.foo'
2026-03-10T11:51:45.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:45 vm05 bash[65415]: audit 2026-03-10T11:51:44.258925+0000 mgr.y (mgr.44107) 323 : audit [DBG] from='client.34457 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:45.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:45 vm05 bash[65415]: audit 2026-03-10T11:51:44.762654+0000 mon.a (mon.0) 543 : audit [DBG] from='client.? 192.168.123.105:0/2606545837' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
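With the cluster HEALTH_OK, the run launches the next staggered step, restricted to the rgw.foo service only, and then polls until the upgrade either completes or reports an error. Reflowed from the cephadm-shell one-liners above and below (a sketch; $sha1 exported as before):

    ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1 --services rgw.foo
    # poll every 30s while the upgrade is in progress and no error message appears
    while ceph orch upgrade status | jq '.in_progress' | grep true && \
          ! ceph orch upgrade status | jq '.message' | grep Error ; do
        ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30
    done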
2026-03-10T11:51:46.740 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:51:46.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:46 vm05 bash[65415]: audit 2026-03-10T11:51:45.282001+0000 mgr.y (mgr.44107) 324 : audit [DBG] from='client.54510 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "services": "rgw.foo", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:46.815 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:46 vm05 bash[65415]: cluster 2026-03-10T11:51:45.474147+0000 mgr.y (mgr.44107) 325 : cluster [DBG] pgmap v172: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:51:46.816 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; sleep 30 ; done'
2026-03-10T11:51:47.277 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (18m) 88s ago 25m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (5m) 27s ago 24m 67.2M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (6m) 88s ago 24m 44.2M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (6m) 27s ago 27m 468M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (15m) 88s ago 28m 532M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (4m) 88s ago 28m 49.2M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (5m) 27s ago 28m 49.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (4m) 88s ago 28m 45.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (18m) 88s ago 25m 8024k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (18m) 27s ago 25m 7896k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (2m) 88s ago 27m 46.9M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (93s) 88s ago 27m 22.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8c6d1f8db09
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (3m) 88s ago 27m 46.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (3m) 88s ago 26m 68.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (78s) 27s ago 26m 49.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f48f9737e97e
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (63s) 27s ago 26m 46.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4b51ce79d374
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (47s) 27s ago 26m 66.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8db64879085d
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (32s) 27s ago 25m 23.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e86e1860ea0d
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (6m) 27s ago 25m 45.4M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (24m) 88s ago 24m 89.4M - 17.2.0 e1d6a67b021e f2644e7eb2f2
2026-03-10T11:51:47.662 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (24m) 27s ago 24m 90.7M - 17.2.0 e1d6a67b021e 4a4d4c0acae7
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:47 vm05 bash[65415]: cephadm 2026-03-10T11:51:46.736811+0000 mgr.y (mgr.44107) 326 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:47 vm05 bash[65415]: audit 2026-03-10T11:51:46.741742+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:47 vm05 bash[65415]: audit 2026-03-10T11:51:46.745782+0000 mon.c (mon.1) 367 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:47 vm05 bash[65415]: audit 2026-03-10T11:51:46.749663+0000 mon.c (mon.1) 368 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:47 vm05 bash[65415]: audit 2026-03-10T11:51:46.751892+0000 mon.c (mon.1) 369 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:47 vm05 bash[65415]: audit 2026-03-10T11:51:46.756651+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:47 vm05 bash[65415]: cephadm 2026-03-10T11:51:46.811844+0000 mgr.y (mgr.44107) 327 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:47 vm05 bash[68966]: cephadm 2026-03-10T11:51:46.736811+0000 mgr.y (mgr.44107) 326 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:47 vm05 bash[68966]: audit 2026-03-10T11:51:46.741742+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:47 vm05 bash[68966]: audit 2026-03-10T11:51:46.745782+0000 mon.c (mon.1) 367 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:47 vm05 bash[68966]: audit 2026-03-10T11:51:46.749663+0000 mon.c (mon.1) 368 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:47 vm05 bash[68966]: audit 2026-03-10T11:51:46.751892+0000 mon.c (mon.1) 369 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:47 vm05 bash[68966]: audit 2026-03-10T11:51:46.756651+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:47.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:47 vm05 bash[68966]: cephadm 2026-03-10T11:51:46.811844+0000 mgr.y (mgr.44107) 327 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 17.2.0 (43e2e60a7559d3f46c9d53f1ca875fd499a1e35e) quincy (stable)": 2,
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 13
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:51:47.901 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:51:48.096 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:51:48.096 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T11:51:48.096 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true,
2026-03-10T11:51:48.096 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading daemons in service(s) rgw.foo",
2026-03-10T11:51:48.096 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:51:48.096 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "",
2026-03-10T11:51:48.096 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image",
2026-03-10T11:51:48.096 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:51:48.096 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:51:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:47 vm07 bash[46158]: cephadm 2026-03-10T11:51:46.736811+0000 mgr.y (mgr.44107) 326 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:51:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:47 vm07 bash[46158]: audit 2026-03-10T11:51:46.741742+0000 mon.a (mon.0) 544 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:47 vm07 bash[46158]: audit 2026-03-10T11:51:46.745782+0000 mon.c (mon.1) 367 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:51:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:47 vm07 bash[46158]: audit 2026-03-10T11:51:46.749663+0000 mon.c (mon.1) 368 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
"client.admin"}]: dispatch 2026-03-10T11:51:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:47 vm07 bash[46158]: audit 2026-03-10T11:51:46.756651+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:47 vm07 bash[46158]: audit 2026-03-10T11:51:46.756651+0000 mon.a (mon.0) 545 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:47 vm07 bash[46158]: cephadm 2026-03-10T11:51:46.811844+0000 mgr.y (mgr.44107) 327 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:51:48.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:47 vm07 bash[46158]: cephadm 2026-03-10T11:51:46.811844+0000 mgr.y (mgr.44107) 327 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:47.272781+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.54516 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:47.272781+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.54516 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:47.469208+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.54519 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:47.469208+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.54519 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: cluster 2026-03-10T11:51:47.474558+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: cluster 2026-03-10T11:51:47.474558+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:47.662682+0000 mgr.y (mgr.44107) 331 : audit [DBG] from='client.54525 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:47.662682+0000 mgr.y (mgr.44107) 331 : audit [DBG] from='client.54525 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:47.906168+0000 mon.a (mon.0) 546 : audit [DBG] from='client.? 
192.168.123.105:0/3377841168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:47.906168+0000 mon.a (mon.0) 546 : audit [DBG] from='client.? 192.168.123.105:0/3377841168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.100549+0000 mgr.y (mgr.44107) 332 : audit [DBG] from='client.34481 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.100549+0000 mgr.y (mgr.44107) 332 : audit [DBG] from='client.34481 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.299235+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:48.972 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.299235+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.301651+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.301651+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.303082+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.303082+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.307346+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.307346+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.309875+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.309875+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.313867+0000 mon.a (mon.0) 549 : audit [INF] 
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.313867+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.316333+0000 mon.c (mon.1) 373 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.320313+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.322865+0000 mon.c (mon.1) 374 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.326816+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.329943+0000 mon.c (mon.1) 375 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.333854+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.724117+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.726485+0000 mon.c (mon.1) 376 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.726706+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:48 vm05 bash[68966]: audit 2026-03-10T11:51:48.728323+0000 mon.c (mon.1) 377 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:51:48 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:51:48] "GET /metrics HTTP/1.1" 200 37893 "" "Prometheus/2.51.0"
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:47.272781+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.54516 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:47.469208+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.54519 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: cluster 2026-03-10T11:51:47.474558+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:47.662682+0000 mgr.y (mgr.44107) 331 : audit [DBG] from='client.54525 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:47.906168+0000 mon.a (mon.0) 546 : audit [DBG] from='client.? 192.168.123.105:0/3377841168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.100549+0000 mgr.y (mgr.44107) 332 : audit [DBG] from='client.34481 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.299235+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.301651+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.303082+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.307346+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.309875+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:48.973 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.313867+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.316333+0000 mon.c (mon.1) 373 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.320313+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.322865+0000 mon.c (mon.1) 374 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.326816+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.329943+0000 mon.c (mon.1) 375 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.333854+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.724117+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.726485+0000 mon.c (mon.1) 376 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.726706+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:48.974 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:48 vm05 bash[65415]: audit 2026-03-10T11:51:48.728323+0000 mon.c (mon.1) 377 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:47.272781+0000 mgr.y (mgr.44107) 328 : audit [DBG] from='client.54516 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:47.469208+0000 mgr.y (mgr.44107) 329 : audit [DBG] from='client.54519 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: cluster 2026-03-10T11:51:47.474558+0000 mgr.y (mgr.44107) 330 : cluster [DBG] pgmap v173: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:47.662682+0000 mgr.y (mgr.44107) 331 : audit [DBG] from='client.54525 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:47.906168+0000 mon.a (mon.0) 546 : audit [DBG] from='client.? 192.168.123.105:0/3377841168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.100549+0000 mgr.y (mgr.44107) 332 : audit [DBG] from='client.34481 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.299235+0000 mon.a (mon.0) 547 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.301651+0000 mon.c (mon.1) 370 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.303082+0000 mon.c (mon.1) 371 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.307346+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.309875+0000 mon.c (mon.1) 372 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.313867+0000 mon.a (mon.0) 549 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.316333+0000 mon.c (mon.1) 373 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.320313+0000 mon.a (mon.0) 550 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.322865+0000 mon.c (mon.1) 374 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.326816+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.329943+0000 mon.c (mon.1) 375 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.333854+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.724117+0000 mon.a (mon.0) 553 : audit [INF] from='mgr.44107 ' entity='mgr.y'
*=*"]}]: dispatch 2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.726485+0000 mon.c (mon.1) 376 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.726706+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.726706+0000 mon.a (mon.0) 554 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm05.fdjkgz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.728323+0000 mon.c (mon.1) 377 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:51:49.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:48 vm07 bash[46158]: audit 2026-03-10T11:51:48.728323+0000 mon.c (mon.1) 377 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:51:49.265 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:49.265 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:49.265 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:49.266 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:51:49.266 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:49.266 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:49.266 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:49.266 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:49.266 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 bash[68966]: cephadm 2026-03-10T11:51:48.300714+0000 mgr.y (mgr.44107) 333 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 bash[68966]: cephadm 2026-03-10T11:51:48.300740+0000 mgr.y (mgr.44107) 334 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 bash[68966]: cephadm 2026-03-10T11:51:48.303825+0000 mgr.y (mgr.44107) 335 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 bash[68966]: cephadm 2026-03-10T11:51:48.310472+0000 mgr.y (mgr.44107) 336 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 bash[68966]: cephadm 2026-03-10T11:51:48.316931+0000 mgr.y (mgr.44107) 337 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 bash[68966]: cephadm 2026-03-10T11:51:48.323463+0000 mgr.y (mgr.44107) 338 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 bash[68966]: cephadm 2026-03-10T11:51:48.330545+0000 mgr.y (mgr.44107) 339 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 bash[68966]: cephadm 2026-03-10T11:51:48.719361+0000 mgr.y (mgr.44107) 340 : cephadm [INF] Upgrade: Updating rgw.foo.vm05.fdjkgz (1/2)
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 bash[68966]: cephadm 2026-03-10T11:51:48.729131+0000 mgr.y (mgr.44107) 341 : cephadm [INF] Deploying daemon rgw.foo.vm05.fdjkgz on vm05
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 bash[65415]: cephadm 2026-03-10T11:51:48.300714+0000 mgr.y (mgr.44107) 333 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 bash[65415]: cephadm 2026-03-10T11:51:48.300740+0000 mgr.y (mgr.44107) 334 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 bash[65415]: cephadm 2026-03-10T11:51:48.303825+0000 mgr.y (mgr.44107) 335 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 bash[65415]: cephadm 2026-03-10T11:51:48.310472+0000 mgr.y (mgr.44107) 336 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 bash[65415]: cephadm 2026-03-10T11:51:48.316931+0000 mgr.y (mgr.44107) 337 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 bash[65415]: cephadm 2026-03-10T11:51:48.323463+0000 mgr.y (mgr.44107) 338 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 bash[65415]: cephadm 2026-03-10T11:51:48.330545+0000 mgr.y (mgr.44107) 339 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 bash[65415]: cephadm 2026-03-10T11:51:48.719361+0000 mgr.y (mgr.44107) 340 : cephadm [INF] Upgrade: Updating rgw.foo.vm05.fdjkgz (1/2)
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 bash[65415]: cephadm 2026-03-10T11:51:48.729131+0000 mgr.y (mgr.44107) 341 : cephadm [INF] Deploying daemon rgw.foo.vm05.fdjkgz on vm05
2026-03-10T11:51:49.979 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:49.980 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:49.980 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:49.980 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:49.980 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:49.980 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:49.980 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:51:49 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
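The records above show mgr.y pinning the upgrade target by image digest and then staggering the rollout: container_image is set per daemon type (mgr, mon, crash, osd, mds) before any individual daemon is redeployed, and the two rgw daemons are then counted down explicitly (1/2, 2/2). A sketch of watching the same progression interactively, using commands that exist in this release family:

  # current target image, services completed, and percent progress
  ceph orch upgrade status
  # per-daemon image and version as each daemon is redeployed
  ceph orch ps
  ceph versions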
2026-03-10T11:51:50.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:49 vm07 bash[46158]: cephadm 2026-03-10T11:51:48.300714+0000 mgr.y (mgr.44107) 333 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:51:50.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:49 vm07 bash[46158]: cephadm 2026-03-10T11:51:48.300740+0000 mgr.y (mgr.44107) 334 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:51:50.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:49 vm07 bash[46158]: cephadm 2026-03-10T11:51:48.303825+0000 mgr.y (mgr.44107) 335 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:51:50.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:49 vm07 bash[46158]: cephadm 2026-03-10T11:51:48.310472+0000 mgr.y (mgr.44107) 336 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:51:50.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:49 vm07 bash[46158]: cephadm 2026-03-10T11:51:48.316931+0000 mgr.y (mgr.44107) 337 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:51:50.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:49 vm07 bash[46158]: cephadm 2026-03-10T11:51:48.323463+0000 mgr.y (mgr.44107) 338 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-10T11:51:50.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:49 vm07 bash[46158]: cephadm 2026-03-10T11:51:48.330545+0000 mgr.y (mgr.44107) 339 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-10T11:51:50.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:49 vm07 bash[46158]: cephadm 2026-03-10T11:51:48.719361+0000 mgr.y (mgr.44107) 340 : cephadm [INF] Upgrade: Updating rgw.foo.vm05.fdjkgz (1/2)
2026-03-10T11:51:50.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:49 vm07 bash[46158]: cephadm 2026-03-10T11:51:48.729131+0000 mgr.y (mgr.44107) 341 : cephadm [INF] Deploying daemon rgw.foo.vm05.fdjkgz on vm05
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 bash[46158]: audit 2026-03-10T11:51:49.196826+0000 mgr.y (mgr.44107) 342 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 bash[46158]: cluster 2026-03-10T11:51:49.474883+0000 mgr.y (mgr.44107) 343 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 bash[46158]: audit 2026-03-10T11:51:50.011594+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 bash[46158]: audit 2026-03-10T11:51:50.019533+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 bash[46158]: audit 2026-03-10T11:51:50.492283+0000 mon.c (mon.1) 378 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 bash[46158]: audit 2026-03-10T11:51:50.593650+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 bash[46158]: audit 2026-03-10T11:51:50.598828+0000 mon.c (mon.1) 379 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 bash[46158]: audit 2026-03-10T11:51:50.599183+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 bash[46158]: audit 2026-03-10T11:51:50.601815+0000 mon.c (mon.1) 380 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:51.161 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:51.161 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:51.161 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:51.161 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:51.161 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:51.161 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:51.161 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:51:51.161 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
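Before the upgraded rgw.foo.vm07.mbukmh container is deployed, the audit trail above shows cephadm minting the daemon's keyring via auth get-or-create. The logged mon command maps onto the CLI roughly as follows (a sketch of the equivalent call with the same caps, not a command the test itself runs):

  # keyring with the caps cephadm requested in audit record 379/558 above
  ceph auth get-or-create client.rgw.foo.vm07.mbukmh \
      mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'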
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:51 vm05 bash[68966]: audit 2026-03-10T11:51:49.196826+0000 mgr.y (mgr.44107) 342 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:51 vm05 bash[68966]: cluster 2026-03-10T11:51:49.474883+0000 mgr.y (mgr.44107) 343 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:51 vm05 bash[68966]: audit 2026-03-10T11:51:50.011594+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:51 vm05 bash[68966]: audit 2026-03-10T11:51:50.019533+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:51 vm05 bash[68966]: audit 2026-03-10T11:51:50.492283+0000 mon.c (mon.1) 378 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:51 vm05 bash[68966]: audit 2026-03-10T11:51:50.593650+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:51 vm05 bash[68966]: audit 2026-03-10T11:51:50.598828+0000 mon.c (mon.1) 379 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:51 vm05 bash[68966]: audit 2026-03-10T11:51:50.599183+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:51 vm05 bash[68966]: audit 2026-03-10T11:51:50.601815+0000 mon.c (mon.1) 380 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:51 vm05 bash[65415]: audit 2026-03-10T11:51:49.196826+0000 mgr.y (mgr.44107) 342 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:51 vm05 bash[65415]: cluster 2026-03-10T11:51:49.474883+0000 mgr.y (mgr.44107) 343 : cluster [DBG] pgmap v174: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:51 vm05 bash[65415]: audit 2026-03-10T11:51:50.011594+0000 mon.a (mon.0) 555 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:51 vm05 bash[65415]: audit 2026-03-10T11:51:50.019533+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:51 vm05 bash[65415]: audit 2026-03-10T11:51:50.492283+0000 mon.c (mon.1) 378 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:51 vm05 bash[65415]: audit 2026-03-10T11:51:50.593650+0000 mon.a (mon.0) 557 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:51 vm05 bash[65415]: audit 2026-03-10T11:51:50.598828+0000 mon.c (mon.1) 379 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:51 vm05 bash[65415]: audit 2026-03-10T11:51:50.599183+0000 mon.a (mon.0) 558 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.vm07.mbukmh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-10T11:51:51.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:51 vm05 bash[65415]: audit 2026-03-10T11:51:50.601815+0000 mon.c (mon.1) 380 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:51:51.697 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
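Both vm05 mons relay the same mgr audit batch, ending with config generate-minimal-conf: the stripped-down ceph.conf that cephadm ships into each freshly deployed container. The same output can be reproduced by hand (illustrative, from any host with an admin keyring):

  # minimal conf (fsid plus mon addresses) injected into new daemon containers
  ceph config generate-minimal-conf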
2026-03-10T11:51:51.697 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:51.697 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:51.697 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:51.697 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:51.697 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:51.697 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:51.697 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:51.697 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:51:51 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: cephadm 2026-03-10T11:51:50.587890+0000 mgr.y (mgr.44107) 344 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.mbukmh (2/2) 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: cephadm 2026-03-10T11:51:50.587890+0000 mgr.y (mgr.44107) 344 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.mbukmh (2/2) 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: cephadm 2026-03-10T11:51:50.603568+0000 mgr.y (mgr.44107) 345 : cephadm [INF] Deploying daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: cephadm 2026-03-10T11:51:50.603568+0000 mgr.y (mgr.44107) 345 : cephadm [INF] Deploying daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: audit 2026-03-10T11:51:51.663054+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: audit 2026-03-10T11:51:51.663054+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: audit 2026-03-10T11:51:51.670530+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: audit 2026-03-10T11:51:51.670530+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: audit 2026-03-10T11:51:51.673986+0000 mon.c (mon.1) 381 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:52.017 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:52 vm07 bash[46158]: audit 2026-03-10T11:51:51.673986+0000 mon.c (mon.1) 381 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: cephadm 2026-03-10T11:51:50.587890+0000 mgr.y (mgr.44107) 344 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.mbukmh (2/2) 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: cephadm 2026-03-10T11:51:50.587890+0000 mgr.y (mgr.44107) 344 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.mbukmh (2/2) 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: cephadm 2026-03-10T11:51:50.603568+0000 mgr.y (mgr.44107) 345 : cephadm [INF] Deploying daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: cephadm 2026-03-10T11:51:50.603568+0000 mgr.y (mgr.44107) 345 : cephadm [INF] Deploying daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: audit 2026-03-10T11:51:51.663054+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.340 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: audit 2026-03-10T11:51:51.663054+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: audit 2026-03-10T11:51:51.670530+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: audit 2026-03-10T11:51:51.670530+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: audit 2026-03-10T11:51:51.673986+0000 mon.c (mon.1) 381 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:52 vm05 bash[65415]: audit 2026-03-10T11:51:51.673986+0000 mon.c (mon.1) 381 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: cephadm 2026-03-10T11:51:50.587890+0000 mgr.y (mgr.44107) 344 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.mbukmh (2/2) 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: cephadm 2026-03-10T11:51:50.587890+0000 mgr.y (mgr.44107) 344 : cephadm [INF] Upgrade: Updating rgw.foo.vm07.mbukmh (2/2) 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: cephadm 2026-03-10T11:51:50.603568+0000 mgr.y (mgr.44107) 345 : cephadm [INF] Deploying daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: cephadm 2026-03-10T11:51:50.603568+0000 mgr.y (mgr.44107) 345 : cephadm [INF] Deploying daemon rgw.foo.vm07.mbukmh on vm07 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: audit 2026-03-10T11:51:51.663054+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: audit 2026-03-10T11:51:51.663054+0000 mon.a (mon.0) 559 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: audit 2026-03-10T11:51:51.670530+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: audit 2026-03-10T11:51:51.670530+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: audit 2026-03-10T11:51:51.673986+0000 mon.c (mon.1) 381 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:52.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:52 vm05 bash[68966]: audit 2026-03-10T11:51:51.673986+0000 mon.c (mon.1) 381 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:51:53.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:53 vm05 bash[65415]: cluster 2026-03-10T11:51:51.475312+0000 mgr.y 
(mgr.44107) 346 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 85 B/s wr, 32 op/s 2026-03-10T11:51:53.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:53 vm05 bash[65415]: cluster 2026-03-10T11:51:51.475312+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 85 B/s wr, 32 op/s 2026-03-10T11:51:53.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:53 vm05 bash[68966]: cluster 2026-03-10T11:51:51.475312+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 85 B/s wr, 32 op/s 2026-03-10T11:51:53.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:53 vm05 bash[68966]: cluster 2026-03-10T11:51:51.475312+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 85 B/s wr, 32 op/s 2026-03-10T11:51:53.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:53 vm07 bash[46158]: cluster 2026-03-10T11:51:51.475312+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 85 B/s wr, 32 op/s 2026-03-10T11:51:53.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:53 vm07 bash[46158]: cluster 2026-03-10T11:51:51.475312+0000 mgr.y (mgr.44107) 346 : cluster [DBG] pgmap v175: 161 pgs: 161 active+clean; 457 KiB data, 275 MiB used, 160 GiB / 160 GiB avail; 22 KiB/s rd, 85 B/s wr, 32 op/s 2026-03-10T11:51:55.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:55 vm05 bash[65415]: cluster 2026-03-10T11:51:53.475751+0000 mgr.y (mgr.44107) 347 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 283 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 85 B/s wr, 73 op/s 2026-03-10T11:51:55.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:55 vm05 bash[65415]: cluster 2026-03-10T11:51:53.475751+0000 mgr.y (mgr.44107) 347 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 283 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 85 B/s wr, 73 op/s 2026-03-10T11:51:55.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:55 vm05 bash[68966]: cluster 2026-03-10T11:51:53.475751+0000 mgr.y (mgr.44107) 347 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 283 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 85 B/s wr, 73 op/s 2026-03-10T11:51:55.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:55 vm05 bash[68966]: cluster 2026-03-10T11:51:53.475751+0000 mgr.y (mgr.44107) 347 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 283 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 85 B/s wr, 73 op/s 2026-03-10T11:51:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:55 vm07 bash[46158]: cluster 2026-03-10T11:51:53.475751+0000 mgr.y (mgr.44107) 347 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 283 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 85 B/s wr, 73 op/s 2026-03-10T11:51:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:55 vm07 bash[46158]: cluster 2026-03-10T11:51:53.475751+0000 mgr.y (mgr.44107) 347 : cluster [DBG] pgmap v176: 161 pgs: 161 active+clean; 457 KiB data, 283 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 85 B/s wr, 73 op/s 2026-03-10T11:51:57.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:57 vm05 bash[68966]: cluster 
2026-03-10T11:51:55.476154+0000 mgr.y (mgr.44107) 348 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 82 KiB/s rd, 85 B/s wr, 126 op/s 2026-03-10T11:51:57.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:57 vm05 bash[68966]: cluster 2026-03-10T11:51:55.476154+0000 mgr.y (mgr.44107) 348 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 82 KiB/s rd, 85 B/s wr, 126 op/s 2026-03-10T11:51:57.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:57 vm05 bash[65415]: cluster 2026-03-10T11:51:55.476154+0000 mgr.y (mgr.44107) 348 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 82 KiB/s rd, 85 B/s wr, 126 op/s 2026-03-10T11:51:57.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:57 vm05 bash[65415]: cluster 2026-03-10T11:51:55.476154+0000 mgr.y (mgr.44107) 348 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 82 KiB/s rd, 85 B/s wr, 126 op/s 2026-03-10T11:51:57.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:57 vm07 bash[46158]: cluster 2026-03-10T11:51:55.476154+0000 mgr.y (mgr.44107) 348 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 82 KiB/s rd, 85 B/s wr, 126 op/s 2026-03-10T11:51:57.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:57 vm07 bash[46158]: cluster 2026-03-10T11:51:55.476154+0000 mgr.y (mgr.44107) 348 : cluster [DBG] pgmap v177: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 82 KiB/s rd, 85 B/s wr, 126 op/s 2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.046495+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.046495+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.061045+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.061045+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.082748+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.082748+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.089693+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.089693+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.625919+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:51:58.340 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.625919+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.633785+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.658141+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:58 vm05 bash[65415]: audit 2026-03-10T11:51:57.667050+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:58 vm05 bash[68966]: audit 2026-03-10T11:51:57.046495+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:58 vm05 bash[68966]: audit 2026-03-10T11:51:57.061045+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:58 vm05 bash[68966]: audit 2026-03-10T11:51:57.082748+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:58 vm05 bash[68966]: audit 2026-03-10T11:51:57.089693+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:58 vm05 bash[68966]: audit 2026-03-10T11:51:57.625919+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:58 vm05 bash[68966]: audit 2026-03-10T11:51:57.633785+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:58 vm05 bash[68966]: audit 2026-03-10T11:51:57.658141+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:58 vm05 bash[68966]: audit 2026-03-10T11:51:57.667050+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:58 vm07 bash[46158]: audit 2026-03-10T11:51:57.046495+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:58 vm07 bash[46158]: audit 2026-03-10T11:51:57.061045+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:58 vm07 bash[46158]: audit 2026-03-10T11:51:57.082748+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:58 vm07 bash[46158]: audit 2026-03-10T11:51:57.089693+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:58 vm07 bash[46158]: audit 2026-03-10T11:51:57.625919+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:58 vm07 bash[46158]: audit 2026-03-10T11:51:57.633785+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:58 vm07 bash[46158]: audit 2026-03-10T11:51:57.658141+0000 mon.a (mon.0) 567 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:58.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:58 vm07 bash[46158]: audit 2026-03-10T11:51:57.667050+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:51:59.200 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:51:59 vm05 bash[68966]: cluster 2026-03-10T11:51:57.476763+0000 mgr.y (mgr.44107) 349 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s
2026-03-10T11:51:59.200 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:51:59 vm05 bash[65415]: cluster 2026-03-10T11:51:57.476763+0000 mgr.y (mgr.44107) 349 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s
2026-03-10T11:51:59.200 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:51:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:51:58] "GET /metrics HTTP/1.1" 200 37892 "" "Prometheus/2.51.0"
2026-03-10T11:51:59.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:51:59 vm07 bash[46158]: cluster 2026-03-10T11:51:57.476763+0000 mgr.y (mgr.44107) 349 : cluster [DBG] pgmap v178: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s
2026-03-10T11:52:01.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:01 vm07 bash[46158]: audit 2026-03-10T11:51:59.205470+0000 mgr.y (mgr.44107) 350 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:01.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:01 vm07 bash[46158]: cluster 2026-03-10T11:51:59.477082+0000 mgr.y (mgr.44107) 351 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s
2026-03-10T11:52:01.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:01 vm05 bash[68966]: audit 2026-03-10T11:51:59.205470+0000 mgr.y (mgr.44107) 350 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:01.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:01 vm05 bash[68966]: cluster 2026-03-10T11:51:59.477082+0000 mgr.y (mgr.44107) 351 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s
2026-03-10T11:52:01.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:01 vm05 bash[65415]: audit 2026-03-10T11:51:59.205470+0000 mgr.y (mgr.44107) 350 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:01.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:01 vm05 bash[65415]: cluster 2026-03-10T11:51:59.477082+0000 mgr.y (mgr.44107) 351 : cluster [DBG] pgmap v179: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 167 op/s
2026-03-10T11:52:03.429 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:03 vm05 bash[65415]: cluster 2026-03-10T11:52:01.477419+0000 mgr.y (mgr.44107) 352 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 168 op/s
2026-03-10T11:52:03.430 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:03 vm05 bash[68966]: cluster 2026-03-10T11:52:01.477419+0000 mgr.y (mgr.44107) 352 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 168 op/s
2026-03-10T11:52:03.445 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:03 vm07 bash[46158]: cluster 2026-03-10T11:52:01.477419+0000 mgr.y (mgr.44107) 352 : cluster [DBG] pgmap v180: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 109 KiB/s rd, 170 B/s wr, 168 op/s
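The pgmap digests above (v178-v180) show all 161 PGs active+clean while the upgrade winds down, i.e. data availability is undisturbed. A minimal way to spot-check the same state by hand, assuming only the stock ceph CLI (standard commands, not part of this job's task list):

    ceph pg stat         # one-line PG summary, e.g. "161 pgs: 161 active+clean"
    ceph health detail   # expands any warning the pgmap digest would hide
    ceph -s              # full status; matches the 'cluster [DBG] pgmap ...' digests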
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.165512+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.173600+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.249208+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.255267+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.256290+0000 mon.c (mon.1) 382 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.256839+0000 mon.c (mon.1) 383 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.260614+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.300885+0000 mon.c (mon.1) 384 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.302128+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.303026+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.303732+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.305209+0000 mon.c (mon.1) 388 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.306947+0000 mon.c (mon.1) 389 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.307983+0000 mon.c (mon.1) 390 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: cephadm 2026-03-10T11:52:03.308675+0000 mgr.y (mgr.44107) 353 : cephadm [INF] Upgrade: Setting container_image for all rgw
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.312694+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.314751+0000 mon.c (mon.1) 391 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.314976+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.317644+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]': finished
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.319650+0000 mon.c (mon.1) 392 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.319909+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.322603+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]': finished
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.324920+0000 mon.c (mon.1) 393 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: cephadm 2026-03-10T11:52:03.325400+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.328414+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.330578+0000 mon.c (mon.1) 394 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.331334+0000 mon.c (mon.1) 395 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: cephadm 2026-03-10T11:52:03.331789+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.335280+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.337620+0000 mon.c (mon.1) 396 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: cephadm 2026-03-10T11:52:03.338057+0000 mgr.y (mgr.44107) 356 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.341234+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.343875+0000 mon.c (mon.1) 397 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: cephadm 2026-03-10T11:52:03.344511+0000 mgr.y (mgr.44107) 357 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.348712+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.350699+0000 mon.c (mon.1) 398 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.351784+0000 mon.c (mon.1) 399 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.352946+0000 mon.c (mon.1) 400 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.354001+0000 mon.c (mon.1) 401 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.355029+0000 mon.c (mon.1) 402 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
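The cephadm entries above show the upgrade module setting container_image per daemon type (rgw, rbd-mirror, ceph-exporter, nfs, nvmeof) and polling 'versions' between steps: the tail end of the staggered upgrade this job exercises. The same per-type scoping is available by hand; a minimal sketch using standard 'ceph orch upgrade' options (the target image tag here is illustrative, not from this run):

    ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0 --daemon-types mgr
    ceph orch upgrade status    # reports progress and which daemons are done
    # once the first scope completes, widen it to the next daemon types
    ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.0 --daemon-types mon,osd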
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.356016+0000 mon.c (mon.1) 403 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: cephadm 2026-03-10T11:52:03.356648+0000 mgr.y (mgr.44107) 358 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.357742+0000 mon.c (mon.1) 404 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.358068+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.361330+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.363982+0000 mon.c (mon.1) 405 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.449 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.364186+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.366464+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.369252+0000 mon.c (mon.1) 406 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.369474+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.372161+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.375011+0000 mon.c (mon.1) 407 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.375213+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.377885+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.381056+0000 mon.c (mon.1) 408 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.381271+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.383635+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.386344+0000 mon.c (mon.1) 409 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.386558+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.388938+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.391635+0000 mon.c (mon.1) 410 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.391853+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.394351+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.397071+0000 mon.c (mon.1) 411 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.397279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.398353+0000 mon.c (mon.1) 412 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.398557+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:04.450 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.401026+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.403596+0000 mon.c (mon.1) 413 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.403826+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.404994+0000 mon.c (mon.1) 414 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.405225+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.407774+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.411638+0000 mon.c (mon.1) 415 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.411865+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.414186+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.417214+0000 mon.c (mon.1) 416 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.417402+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.418402+0000 mon.c (mon.1) 417 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.418578+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.419520+0000 mon.c (mon.1) 418 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.419798+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.420374+0000 mon.c (mon.1) 419 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.420551+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.421078+0000 mon.c (mon.1) 420 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.421236+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.421733+0000 mon.c (mon.1) 421 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.421894+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
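Each 'config rm' dispatch/finished pair above is the JSON form of a plain CLI call: on completion the upgrade module clears the per-section container_image overrides it set while daemons were being staggered, so anything deployed later inherits the cluster-wide image. The hand-run equivalent, using the same section names that appear in the log (standard 'ceph config' commands):

    ceph config rm mgr container_image
    ceph config rm mon container_image
    ceph config rm osd container_image
    ceph config rm client.rgw container_image
    ceph config get client.rgw container_image   # a leftover override would show up here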
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: cephadm 2026-03-10T11:52:03.422366+0000 mgr.y (mgr.44107) 359 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.422685+0000 mon.c (mon.1) 422 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.422836+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.426415+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.426757+0000 mon.c (mon.1) 423 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.427868+0000 mon.c (mon.1) 424 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.428356+0000 mon.c (mon.1) 425 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: cephadm 2026-03-10T11:52:03.430951+0000 mgr.y (mgr.44107) 360 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: cluster 2026-03-10T11:52:03.477911+0000 mgr.y (mgr.44107) 361 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 88 KiB/s rd, 85 B/s wr, 136 op/s
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.586397+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.451 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.630320+0000 mon.c (mon.1) 426 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:04.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.631924+0000 mon.c (mon.1) 427 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:04.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.632803+0000 mon.c (mon.1) 428 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:04.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.637781+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.452 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:04 vm07 bash[46158]: audit 2026-03-10T11:52:03.637781+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.165512+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.165512+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.173600+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.173600+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.249208+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.249208+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.255267+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.255267+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.256290+0000 mon.c (mon.1) 382 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.256290+0000 mon.c (mon.1) 382 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.256839+0000 mon.c (mon.1) 383 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.256839+0000 mon.c (mon.1) 383 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.260614+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.260614+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.300885+0000 mon.c (mon.1) 384 : audit [DBG] from='mgr.44107 
192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.300885+0000 mon.c (mon.1) 384 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.302128+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.302128+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.303026+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.303026+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.303732+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.303732+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.305209+0000 mon.c (mon.1) 388 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.305209+0000 mon.c (mon.1) 388 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.306947+0000 mon.c (mon.1) 389 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.306947+0000 mon.c (mon.1) 389 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.307983+0000 mon.c (mon.1) 390 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.307983+0000 mon.c (mon.1) 390 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' 
cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.308675+0000 mgr.y (mgr.44107) 353 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.308675+0000 mgr.y (mgr.44107) 353 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.312694+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.312694+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.314751+0000 mon.c (mon.1) 391 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.314751+0000 mon.c (mon.1) 391 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.314976+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.314976+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.317644+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]': finished 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.317644+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]': finished 2026-03-10T11:52:04.591 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.319650+0000 mon.c (mon.1) 392 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.319650+0000 mon.c (mon.1) 392 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 
2026-03-10T11:52:03.319909+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.319909+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.322603+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]': finished 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.322603+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]': finished 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.324920+0000 mon.c (mon.1) 393 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.324920+0000 mon.c (mon.1) 393 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.325400+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.325400+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.328414+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.328414+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.330578+0000 mon.c (mon.1) 394 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.330578+0000 mon.c (mon.1) 394 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.331334+0000 mon.c (mon.1) 395 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.331334+0000 mon.c (mon.1) 395 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": 
"versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.331789+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.331789+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.335280+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.335280+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.337620+0000 mon.c (mon.1) 396 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.337620+0000 mon.c (mon.1) 396 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.338057+0000 mgr.y (mgr.44107) 356 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.338057+0000 mgr.y (mgr.44107) 356 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.341234+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.341234+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.343875+0000 mon.c (mon.1) 397 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.343875+0000 mon.c (mon.1) 397 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.344511+0000 mgr.y (mgr.44107) 357 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.344511+0000 mgr.y (mgr.44107) 357 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.348712+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 
2026-03-10T11:52:03.348712+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.350699+0000 mon.c (mon.1) 398 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.350699+0000 mon.c (mon.1) 398 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.351784+0000 mon.c (mon.1) 399 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.351784+0000 mon.c (mon.1) 399 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.352946+0000 mon.c (mon.1) 400 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.352946+0000 mon.c (mon.1) 400 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.354001+0000 mon.c (mon.1) 401 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.354001+0000 mon.c (mon.1) 401 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.355029+0000 mon.c (mon.1) 402 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.355029+0000 mon.c (mon.1) 402 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.356016+0000 mon.c (mon.1) 403 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.356016+0000 mon.c (mon.1) 403 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.356648+0000 mgr.y (mgr.44107) 358 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T11:52:04.592 
INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.356648+0000 mgr.y (mgr.44107) 358 : cephadm [INF] Upgrade: Finalizing container_image settings 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.357742+0000 mon.c (mon.1) 404 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.357742+0000 mon.c (mon.1) 404 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.358068+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.358068+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.361330+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.361330+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.363982+0000 mon.c (mon.1) 405 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.363982+0000 mon.c (mon.1) 405 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.364186+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.364186+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.366464+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.366464+0000 mon.a (mon.0) 586 
: audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.369252+0000 mon.c (mon.1) 406 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.369252+0000 mon.c (mon.1) 406 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.369474+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.369474+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.372161+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:52:04.592 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.372161+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.375011+0000 mon.c (mon.1) 407 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.375011+0000 mon.c (mon.1) 407 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.375213+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.375213+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.377885+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.377885+0000 mon.a (mon.0) 590 : audit 
[INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.381056+0000 mon.c (mon.1) 408 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.381056+0000 mon.c (mon.1) 408 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.381271+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.381271+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.383635+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.383635+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.386344+0000 mon.c (mon.1) 409 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.386344+0000 mon.c (mon.1) 409 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.386558+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.386558+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.388938+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.388938+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.44107 ' 
entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.391635+0000 mon.c (mon.1) 410 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.391635+0000 mon.c (mon.1) 410 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.391853+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.391853+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.394351+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.394351+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.397071+0000 mon.c (mon.1) 411 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.397071+0000 mon.c (mon.1) 411 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.397279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.397279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.398353+0000 mon.c (mon.1) 412 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 
2026-03-10T11:52:03.398353+0000 mon.c (mon.1) 412 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.398557+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.398557+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.401026+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.401026+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.403596+0000 mon.c (mon.1) 413 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.403596+0000 mon.c (mon.1) 413 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.403826+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.403826+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.404994+0000 mon.c (mon.1) 414 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.404994+0000 mon.c (mon.1) 414 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.405225+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 
2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.405225+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.407774+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.407774+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.411638+0000 mon.c (mon.1) 415 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.411638+0000 mon.c (mon.1) 415 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.411865+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.411865+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.414186+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.414186+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.417214+0000 mon.c (mon.1) 416 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.417214+0000 mon.c (mon.1) 416 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.417402+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": 
"mon"}]: dispatch 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.165512+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.165512+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.593 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.173600+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.173600+0000 mon.a (mon.0) 570 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.249208+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.249208+0000 mon.a (mon.0) 571 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.255267+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.255267+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.256290+0000 mon.c (mon.1) 382 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.256290+0000 mon.c (mon.1) 382 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.256839+0000 mon.c (mon.1) 383 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.256839+0000 mon.c (mon.1) 383 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.260614+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.260614+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.300885+0000 mon.c (mon.1) 384 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 
bash[65415]: audit 2026-03-10T11:52:03.300885+0000 mon.c (mon.1) 384 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.302128+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.302128+0000 mon.c (mon.1) 385 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.303026+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.303026+0000 mon.c (mon.1) 386 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.303732+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.303732+0000 mon.c (mon.1) 387 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.305209+0000 mon.c (mon.1) 388 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.305209+0000 mon.c (mon.1) 388 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.306947+0000 mon.c (mon.1) 389 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.306947+0000 mon.c (mon.1) 389 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.307983+0000 mon.c (mon.1) 390 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.307983+0000 mon.c (mon.1) 390 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: cephadm 2026-03-10T11:52:03.308675+0000 mgr.y (mgr.44107) 353 
: cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: cephadm 2026-03-10T11:52:03.308675+0000 mgr.y (mgr.44107) 353 : cephadm [INF] Upgrade: Setting container_image for all rgw 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.312694+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.312694+0000 mon.a (mon.0) 574 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.314751+0000 mon.c (mon.1) 391 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.314751+0000 mon.c (mon.1) 391 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.314976+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.314976+0000 mon.a (mon.0) 575 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.317644+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]': finished 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.317644+0000 mon.a (mon.0) 576 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm05.fdjkgz"}]': finished 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.319650+0000 mon.c (mon.1) 392 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.319650+0000 mon.c (mon.1) 392 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch 2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.319909+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch 2026-03-10T11:52:04.594 
INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.319909+0000 mon.a (mon.0) 577 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]: dispatch
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.322603+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw.foo.vm07.mbukmh"}]': finished
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.324920+0000 mon.c (mon.1) 393 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: cephadm 2026-03-10T11:52:03.325400+0000 mgr.y (mgr.44107) 354 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.328414+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.330578+0000 mon.c (mon.1) 394 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.331334+0000 mon.c (mon.1) 395 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: cephadm 2026-03-10T11:52:03.331789+0000 mgr.y (mgr.44107) 355 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.335280+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.337620+0000 mon.c (mon.1) 396 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: cephadm 2026-03-10T11:52:03.338057+0000 mgr.y (mgr.44107) 356 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-10T11:52:04.594 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.341234+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.343875+0000 mon.c (mon.1) 397 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: cephadm 2026-03-10T11:52:03.344511+0000 mgr.y (mgr.44107) 357 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.348712+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.350699+0000 mon.c (mon.1) 398 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.351784+0000 mon.c (mon.1) 399 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.352946+0000 mon.c (mon.1) 400 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.354001+0000 mon.c (mon.1) 401 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.355029+0000 mon.c (mon.1) 402 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.356016+0000 mon.c (mon.1) 403 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: cephadm 2026-03-10T11:52:03.356648+0000 mgr.y (mgr.44107) 358 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.357742+0000 mon.c (mon.1) 404 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
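The cephadm entries above show the tail of a staggered upgrade: the orchestrator pins a container_image per daemon section while daemons are upgraded, and "Upgrade: Finalizing container_image settings" begins clearing those pins again (the config rm sequence that follows). A minimal sketch, assuming an admin keyring and a cephadm shell on any host, for checking whether any per-section image pins remain after an upgrade; the grep pattern is only illustrative:

    # list any container_image values still pinned in the cluster config database
    ceph config dump | grep container_image
    # show the effective image for one section, e.g. osd (falls back to the global default when unset)
    ceph config get osd container_image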
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.358068+0000 mon.a (mon.0) 583 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.361330+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.363982+0000 mon.c (mon.1) 405 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.364186+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.366464+0000 mon.a (mon.0) 586 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.369252+0000 mon.c (mon.1) 406 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.369474+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.372161+0000 mon.a (mon.0) 588 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.375011+0000 mon.c (mon.1) 407 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.375213+0000 mon.a (mon.0) 589 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.377885+0000 mon.a (mon.0) 590 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.381056+0000 mon.c (mon.1) 408 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.381271+0000 mon.a (mon.0) 591 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.383635+0000 mon.a (mon.0) 592 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.386344+0000 mon.c (mon.1) 409 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.386558+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.388938+0000 mon.a (mon.0) 594 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.391635+0000 mon.c (mon.1) 410 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.391853+0000 mon.a (mon.0) 595 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:04.595 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.394351+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.397071+0000 mon.c (mon.1) 411 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.397279+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.398353+0000 mon.c (mon.1) 412 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.398557+0000 mon.a (mon.0) 598 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.401026+0000 mon.a (mon.0) 599 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.403596+0000 mon.c (mon.1) 413 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.403826+0000 mon.a (mon.0) 600 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.404994+0000 mon.c (mon.1) 414 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.405225+0000 mon.a (mon.0) 601 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.407774+0000 mon.a (mon.0) 602 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.411638+0000 mon.c (mon.1) 415 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.411865+0000 mon.a (mon.0) 603 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.414186+0000 mon.a (mon.0) 604 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.417214+0000 mon.c (mon.1) 416 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:04 vm05 bash[65415]: audit 2026-03-10T11:52:03.417402+0000 mon.a (mon.0) 605 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.418402+0000 mon.c (mon.1) 417 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.418578+0000 mon.a (mon.0) 606 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.419520+0000 mon.c (mon.1) 418 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.419798+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.420374+0000 mon.c (mon.1) 419 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.420551+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.421078+0000 mon.c (mon.1) 420 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.421236+0000 mon.a (mon.0) 609 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.421733+0000 mon.c (mon.1) 421 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.421894+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.422366+0000 mgr.y (mgr.44107) 359 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.422685+0000 mon.c (mon.1) 422 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:52:04.596 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.422836+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.426415+0000 mon.a (mon.0) 612 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.426757+0000 mon.c (mon.1) 423 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.427868+0000 mon.c (mon.1) 424 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.428356+0000 mon.c (mon.1) 425 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cephadm 2026-03-10T11:52:03.430951+0000 mgr.y (mgr.44107) 360 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: cluster 2026-03-10T11:52:03.477911+0000 mgr.y (mgr.44107) 361 : cluster [DBG] pgmap v181: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 88 KiB/s rd, 85 B/s wr, 136 op/s
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.586397+0000 mon.a (mon.0) 613 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.630320+0000 mon.c (mon.1) 426 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.631924+0000 mon.c (mon.1) 427 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.632803+0000 mon.c (mon.1) 428 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:04.597 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:04 vm05 bash[68966]: audit 2026-03-10T11:52:03.637781+0000 mon.a (mon.0) 614 : audit [INF] from='mgr.44107 ' entity='mgr.y'
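The config generate-minimal-conf and auth get client.admin dispatches above are the mgr rebuilding the minimal ceph.conf and admin keyring that cephadm distributes to managed hosts after a config change. The same information can be produced by hand, assuming an admin keyring:

    # emit a minimal ceph.conf (fsid plus mon addresses) suitable for client hosts
    ceph config generate-minimal-conf
    # print the client.admin key and caps that cephadm copies to admin-labelled hosts
    ceph auth get client.admin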
2026-03-10T11:52:05.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:05 vm05 bash[65415]: audit 2026-03-10T11:52:05.492343+0000 mon.c (mon.1) 429 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
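The osd blocklist ls query above is the mgr refreshing its view of blocklisted client addresses; entries typically appear after mgr failovers or evicted clients. Run directly, it is simply:

    # list blocklisted client addresses; usually empty on a healthy idle cluster
    ceph osd blocklist ls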
2026-03-10T11:52:07.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:06 vm07 bash[46158]: cluster 2026-03-10T11:52:05.478372+0000 mgr.y (mgr.44107) 362 : cluster [DBG] pgmap v182: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 61 KiB/s rd, 85 B/s wr, 95 op/s
2026-03-10T11:52:07.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:06 vm07 bash[46158]: audit 2026-03-10T11:52:05.912781+0000 mon.a (mon.0) 615 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:09.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:08 vm07 bash[46158]: cluster 2026-03-10T11:52:07.478820+0000 mgr.y (mgr.44107) 363 : cluster [DBG] pgmap v183: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 30 KiB/s rd, 85 B/s wr, 44 op/s
2026-03-10T11:52:09.208 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:52:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:52:08] "GET /metrics HTTP/1.1" 200 37924 "" "Prometheus/2.51.0"
2026-03-10T11:52:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:10 vm07 bash[46158]: audit 2026-03-10T11:52:09.213187+0000 mgr.y (mgr.44107) 364 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
(mgr.44107) 364 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:10 vm07 bash[46158]: cluster 2026-03-10T11:52:09.479178+0000 mgr.y (mgr.44107) 365 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:11.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:10 vm07 bash[46158]: cluster 2026-03-10T11:52:09.479178+0000 mgr.y (mgr.44107) 365 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:11.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:10 vm05 bash[65415]: audit 2026-03-10T11:52:09.213187+0000 mgr.y (mgr.44107) 364 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:11.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:10 vm05 bash[65415]: audit 2026-03-10T11:52:09.213187+0000 mgr.y (mgr.44107) 364 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:11.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:10 vm05 bash[65415]: cluster 2026-03-10T11:52:09.479178+0000 mgr.y (mgr.44107) 365 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:11.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:10 vm05 bash[65415]: cluster 2026-03-10T11:52:09.479178+0000 mgr.y (mgr.44107) 365 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:11.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:10 vm05 bash[68966]: audit 2026-03-10T11:52:09.213187+0000 mgr.y (mgr.44107) 364 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:11.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:10 vm05 bash[68966]: audit 2026-03-10T11:52:09.213187+0000 mgr.y (mgr.44107) 364 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:11.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:10 vm05 bash[68966]: cluster 2026-03-10T11:52:09.479178+0000 mgr.y (mgr.44107) 365 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:11.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:10 vm05 bash[68966]: cluster 2026-03-10T11:52:09.479178+0000 mgr.y (mgr.44107) 365 : cluster [DBG] pgmap v184: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:12 vm07 bash[46158]: cluster 2026-03-10T11:52:11.479501+0000 mgr.y (mgr.44107) 366 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T11:52:13.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:12 vm07 bash[46158]: cluster 2026-03-10T11:52:11.479501+0000 mgr.y (mgr.44107) 366 : cluster [DBG] pgmap v185: 161 pgs: 161 
active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T11:52:13.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:12 vm05 bash[65415]: cluster 2026-03-10T11:52:11.479501+0000 mgr.y (mgr.44107) 366 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T11:52:13.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:12 vm05 bash[65415]: cluster 2026-03-10T11:52:11.479501+0000 mgr.y (mgr.44107) 366 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T11:52:13.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:12 vm05 bash[68966]: cluster 2026-03-10T11:52:11.479501+0000 mgr.y (mgr.44107) 366 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T11:52:13.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:12 vm05 bash[68966]: cluster 2026-03-10T11:52:11.479501+0000 mgr.y (mgr.44107) 366 : cluster [DBG] pgmap v185: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T11:52:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:14 vm07 bash[46158]: cluster 2026-03-10T11:52:13.479904+0000 mgr.y (mgr.44107) 367 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:15.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:14 vm07 bash[46158]: cluster 2026-03-10T11:52:13.479904+0000 mgr.y (mgr.44107) 367 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:15.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:14 vm05 bash[65415]: cluster 2026-03-10T11:52:13.479904+0000 mgr.y (mgr.44107) 367 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:15.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:14 vm05 bash[65415]: cluster 2026-03-10T11:52:13.479904+0000 mgr.y (mgr.44107) 367 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:15.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:14 vm05 bash[68966]: cluster 2026-03-10T11:52:13.479904+0000 mgr.y (mgr.44107) 367 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:15.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:14 vm05 bash[68966]: cluster 2026-03-10T11:52:13.479904+0000 mgr.y (mgr.44107) 367 : cluster [DBG] pgmap v186: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 2 op/s 2026-03-10T11:52:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:16 vm07 bash[46158]: cluster 2026-03-10T11:52:15.480355+0000 mgr.y (mgr.44107) 368 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 2026-03-10T11:52:17.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:16 vm07 bash[46158]: cluster 2026-03-10T11:52:15.480355+0000 mgr.y (mgr.44107) 368 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s 
2026-03-10T11:52:17.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:16 vm05 bash[65415]: cluster 2026-03-10T11:52:15.480355+0000 mgr.y (mgr.44107) 368 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s
2026-03-10T11:52:17.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:16 vm05 bash[68966]: cluster 2026-03-10T11:52:15.480355+0000 mgr.y (mgr.44107) 368 : cluster [DBG] pgmap v187: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 3.1 KiB/s rd, 3 op/s
2026-03-10T11:52:18.374 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (18m) 21s ago 25m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (6m) 21s ago 25m 66.9M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (6m) 21s ago 25m 44.6M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (6m) 21s ago 28m 468M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (15m) 21s ago 29m 537M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (4m) 21s ago 29m 55.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (5m) 21s ago 28m 50.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (5m) 21s ago 28m 50.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (18m) 21s ago 25m 8435k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (18m) 21s ago 25m 7919k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (2m) 21s ago 28m 54.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (2m) 21s ago 27m 53.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8c6d1f8db09
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (3m) 21s ago 27m 51.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (3m) 21s ago 27m 74.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (109s) 21s ago 27m 53.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f48f9737e97e
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (94s) 21s ago 26m 49.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4b51ce79d374
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (78s) 21s ago 26m 68.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8db64879085d
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (63s) 21s ago 26m 69.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e86e1860ea0d
2026-03-10T11:52:18.791 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (6m) 21s ago 25m 45.5M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:52:18.792 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (28s) 21s ago 25m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e 41b55296180b
2026-03-10T11:52:18.792 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (27s) 21s ago 25m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e 8f8f41b99bda
2026-03-10T11:52:18.838 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.rgw | length == 1'"'"''
2026-03-10T11:52:19.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:18 vm05 bash[68966]: cluster 2026-03-10T11:52:17.480793+0000 mgr.y (mgr.44107) 369 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-10T11:52:19.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:18 vm05 bash[65415]: cluster 2026-03-10T11:52:17.480793+0000 mgr.y (mgr.44107) 369 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
2026-03-10T11:52:19.090 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:52:18 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:52:18] "GET /metrics HTTP/1.1" 200 37924 "" "Prometheus/2.51.0"
2026-03-10T11:52:19.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:18 vm07 bash[46158]: cluster 2026-03-10T11:52:17.480793+0000 mgr.y (mgr.44107) 369 : cluster [DBG] pgmap v188: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 2.5 KiB/s rd, 2 op/s
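The `'"'"'` runs in the DEBUG line above are not part of the check itself; they are the usual shell idiom for embedding a single quote inside a single-quoted `bash -c` string. Unrolled, the assertion the suite runs is the short pipeline below; `jq -e` derives its exit status from the last value it outputs (true exits 0, false or null exits 1), so the teuthology step fails unless exactly one distinct RGW version is reported.

  # Unrolled form of the quoted command above: assert that `ceph versions`
  # reports exactly one distinct version string under .rgw.
  # jq -e maps the boolean result onto the exit status (true -> 0, false -> 1).
  ceph versions | jq -e '.rgw | length == 1'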
2026-03-10T11:52:19.323 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:52:19.364 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.rgw | keys'"'"' | grep $sha1'
2026-03-10T11:52:19.853 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)"
2026-03-10T11:52:19.907 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-10T11:52:20.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:19 vm05 bash[65415]: audit 2026-03-10T11:52:18.306461+0000 mgr.y (mgr.44107) 370 : audit [DBG] from='client.54612 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:20.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:19 vm05 bash[65415]: audit 2026-03-10T11:52:18.792729+0000 mgr.y (mgr.44107) 371 : audit [DBG] from='client.44560 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:20.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:19 vm05 bash[65415]: audit 2026-03-10T11:52:19.317892+0000 mon.a (mon.0) 616 : audit [DBG] from='client.? 192.168.123.105:0/94477906' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:20.112 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:19 vm05 bash[65415]: audit 2026-03-10T11:52:19.847788+0000 mon.c (mon.1) 430 : audit [DBG] from='client.? 192.168.123.105:0/92468629' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:20.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:19 vm05 bash[68966]: audit 2026-03-10T11:52:18.306461+0000 mgr.y (mgr.44107) 370 : audit [DBG] from='client.54612 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:20.112 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:19 vm05 bash[68966]: audit 2026-03-10T11:52:18.792729+0000 mgr.y (mgr.44107) 371 : audit [DBG] from='client.44560 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:20.113 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:19 vm05 bash[68966]: audit 2026-03-10T11:52:19.317892+0000 mon.a (mon.0) 616 : audit [DBG] from='client.? 192.168.123.105:0/94477906' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:20.113 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:19 vm05 bash[68966]: audit 2026-03-10T11:52:19.847788+0000 mon.c (mon.1) 430 : audit [DBG] from='client.? 192.168.123.105:0/92468629' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:20.441 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:52:20.441 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": null,
2026-03-10T11:52:20.441 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": false,
2026-03-10T11:52:20.441 INFO:teuthology.orchestra.run.vm05.stdout: "which": "",
2026-03-10T11:52:20.441 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:52:20.441 INFO:teuthology.orchestra.run.vm05.stdout: "progress": null,
2026-03-10T11:52:20.441 INFO:teuthology.orchestra.run.vm05.stdout: "message": "",
2026-03-10T11:52:20.441 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:52:20.441 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:52:20.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:19 vm07 bash[46158]: audit 2026-03-10T11:52:18.306461+0000 mgr.y (mgr.44107) 370 : audit [DBG] from='client.54612 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:20.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:19 vm07 bash[46158]: audit 2026-03-10T11:52:18.792729+0000 mgr.y (mgr.44107) 371 : audit [DBG] from='client.44560 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:20.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:19 vm07 bash[46158]: audit 2026-03-10T11:52:19.317892+0000 mon.a (mon.0) 616 : audit [DBG] from='client.? 192.168.123.105:0/94477906' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:20.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:19 vm07 bash[46158]: audit 2026-03-10T11:52:19.847788+0000 mon.c (mon.1) 430 : audit [DBG] from='client.? 192.168.123.105:0/92468629' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
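The JSON block above is the idle shape of `ceph orch upgrade status`: no upgrade has been started yet, so `target_image` is null and `in_progress` is false. A minimal sketch of how a wrapper script could gate on that field (assuming only the fields shown in this output):

  # Minimal sketch: branch on whether cephadm reports an upgrade in flight.
  # jq -e exits 0 only when .in_progress is true.
  if ceph orch upgrade status | jq -e '.in_progress' >/dev/null; then
      echo "upgrade already in progress"
  else
      echo "no upgrade running; safe to start one"
  fi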
2026-03-10T11:52:20.515 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-10T11:52:20.995 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
2026-03-10T11:52:21.006 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:20 vm05 bash[68966]: audit 2026-03-10T11:52:19.223316+0000 mgr.y (mgr.44107) 372 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:21.006 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:20 vm05 bash[68966]: cluster 2026-03-10T11:52:19.481157+0000 mgr.y (mgr.44107) 373 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:21.006 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:20 vm05 bash[68966]: audit 2026-03-10T11:52:20.492599+0000 mon.c (mon.1) 431 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:52:21.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:20 vm05 bash[65415]: audit 2026-03-10T11:52:19.223316+0000 mgr.y (mgr.44107) 372 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:21.006 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:20 vm05 bash[65415]: cluster 2026-03-10T11:52:19.481157+0000 mgr.y (mgr.44107) 373 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:21.007 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:20 vm05 bash[65415]: audit 2026-03-10T11:52:20.492599+0000 mon.c (mon.1) 431 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:52:21.046 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade start --image quay.ceph.io/ceph-ci/ceph:$sha1'
2026-03-10T11:52:21.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:20 vm07 bash[46158]: audit 2026-03-10T11:52:19.223316+0000 mgr.y (mgr.44107) 372 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:21.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:20 vm07 bash[46158]: cluster 2026-03-10T11:52:19.481157+0000 mgr.y (mgr.44107) 373 : cluster [DBG] pgmap v189: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:21.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:20 vm07 bash[46158]: audit 2026-03-10T11:52:20.492599+0000 mon.c (mon.1) 431 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:52:21.472 INFO:teuthology.orchestra.run.vm05.stdout:Initiating upgrade to quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-10T11:52:21.535 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T11:52:21.537 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm05.local
2026-03-10T11:52:21.537 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'while ceph orch upgrade status | jq '"'"'.in_progress'"'"' | grep true && ! ceph orch upgrade status | jq '"'"'.message'"'"' | grep Error ; do ceph orch ps ; ceph versions ; ceph orch upgrade status ; ceph health detail ; sleep 30 ; done'
2026-03-10T11:52:22.024 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:52:22.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:21 vm05 bash[65415]: audit 2026-03-10T11:52:20.445891+0000 mgr.y (mgr.44107) 374 : audit [DBG] from='client.44578 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:21 vm05 bash[65415]: audit 2026-03-10T11:52:21.001045+0000 mon.a (mon.0) 617 : audit [DBG] from='client.? 192.168.123.105:0/2404015617' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:21 vm05 bash[65415]: audit 2026-03-10T11:52:21.473685+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:21 vm05 bash[65415]: audit 2026-03-10T11:52:21.476113+0000 mon.c (mon.1) 432 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:21 vm05 bash[65415]: audit 2026-03-10T11:52:21.477307+0000 mon.c (mon.1) 433 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:21 vm05 bash[65415]: audit 2026-03-10T11:52:21.477803+0000 mon.c (mon.1) 434 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
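The long DEBUG line above is the wait loop for the 4-wait task, obscured by the same single-quote escaping. Unrolled it reads as below: keep polling while `.in_progress` is true and the status message does not contain "Error", dumping daemon state every 30 seconds (the lone `true` on stdout is the grep match from the first `.in_progress` probe).

  # The polling loop as it executes inside the cephadm shell (quoting unrolled):
  while ceph orch upgrade status | jq '.in_progress' | grep true && \
        ! ceph orch upgrade status | jq '.message' | grep Error ; do
      ceph orch ps
      ceph versions
      ceph orch upgrade status
      ceph health detail
      sleep 30
  done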
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:21 vm05 bash[65415]: audit 2026-03-10T11:52:21.482438+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:21 vm05 bash[68966]: audit 2026-03-10T11:52:20.445891+0000 mgr.y (mgr.44107) 374 : audit [DBG] from='client.44578 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:21 vm05 bash[68966]: audit 2026-03-10T11:52:21.001045+0000 mon.a (mon.0) 617 : audit [DBG] from='client.? 192.168.123.105:0/2404015617' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:21 vm05 bash[68966]: audit 2026-03-10T11:52:21.473685+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:21 vm05 bash[68966]: audit 2026-03-10T11:52:21.476113+0000 mon.c (mon.1) 432 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:21 vm05 bash[68966]: audit 2026-03-10T11:52:21.477307+0000 mon.c (mon.1) 433 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:21 vm05 bash[68966]: audit 2026-03-10T11:52:21.477803+0000 mon.c (mon.1) 434 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:22.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:21 vm05 bash[68966]: audit 2026-03-10T11:52:21.482438+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (18m) 25s ago 25m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (6m) 25s ago 25m 66.9M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (6m) 25s ago 25m 44.6M - 3.5 e1d6a67b021e 5fb8678f46ba
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (6m) 25s ago 28m 468M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (15m) 25s ago 29m 537M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (4m) 25s ago 29m 55.8M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (5m) 25s ago 28m 50.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (5m) 25s ago 28m 50.4M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (18m) 25s ago 26m 8435k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (18m) 25s ago 26m 7919k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (2m) 25s ago 28m 54.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (2m) 25s ago 27m 53.8M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8c6d1f8db09
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (3m) 25s ago 27m 51.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (3m) 25s ago 27m 74.5M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:52:22.404 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (113s) 25s ago 27m 53.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f48f9737e97e
2026-03-10T11:52:22.405 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (97s) 25s ago 26m 49.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4b51ce79d374
2026-03-10T11:52:22.405 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (82s) 25s ago 26m 68.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8db64879085d
2026-03-10T11:52:22.405 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (66s) 25s ago 26m 69.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e86e1860ea0d
2026-03-10T11:52:22.405 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (6m) 25s ago 25m 45.5M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:52:22.405 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (32s) 25s ago 25m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e 41b55296180b
2026-03-10T11:52:22.405 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (30s) 25s ago 25m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e 8f8f41b99bda
2026-03-10T11:52:22.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:21 vm07 bash[46158]: audit 2026-03-10T11:52:20.445891+0000 mgr.y (mgr.44107) 374 : audit [DBG] from='client.44578 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:22.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:21 vm07 bash[46158]: audit 2026-03-10T11:52:21.001045+0000 mon.a (mon.0) 617 : audit [DBG] from='client.? 192.168.123.105:0/2404015617' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:52:22.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:21 vm07 bash[46158]: audit 2026-03-10T11:52:21.473685+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:22.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:21 vm07 bash[46158]: audit 2026-03-10T11:52:21.476113+0000 mon.c (mon.1) 432 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:22.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:21 vm07 bash[46158]: audit 2026-03-10T11:52:21.477307+0000 mon.c (mon.1) 433 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:22.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:21 vm07 bash[46158]: audit 2026-03-10T11:52:21.477803+0000 mon.c (mon.1) 434 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:22.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:21 vm07 bash[46158]: audit 2026-03-10T11:52:21.482438+0000 mon.a (mon.0) 619 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:52:22.633 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:52:22.634 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:52:22.634 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 15
2026-03-10T11:52:22.634 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:52:22.634 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:52:22.860 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:52:22.860 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df",
2026-03-10T11:52:22.860 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": true,
2026-03-10T11:52:22.860 INFO:teuthology.orchestra.run.vm05.stdout: "which": "Upgrading all daemon types on all hosts",
2026-03-10T11:52:22.860 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:52:22.860 INFO:teuthology.orchestra.run.vm05.stdout: "progress": "",
2026-03-10T11:52:22.860 INFO:teuthology.orchestra.run.vm05.stdout: "message": "Doing first pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df image",
2026-03-10T11:52:22.860 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:52:22.860 INFO:teuthology.orchestra.run.vm05.stdout:}
cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: audit 2026-03-10T11:52:21.468769+0000 mgr.y (mgr.44107) 375 : audit [DBG] from='client.44590 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: cephadm 2026-03-10T11:52:21.469184+0000 mgr.y (mgr.44107) 376 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: cephadm 2026-03-10T11:52:21.469184+0000 mgr.y (mgr.44107) 376 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: cluster 2026-03-10T11:52:21.481505+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: cluster 2026-03-10T11:52:21.481505+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: cephadm 2026-03-10T11:52:21.537196+0000 mgr.y (mgr.44107) 378 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: cephadm 2026-03-10T11:52:21.537196+0000 mgr.y (mgr.44107) 378 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: audit 2026-03-10T11:52:22.019278+0000 mgr.y (mgr.44107) 379 : audit [DBG] from='client.44596 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: audit 2026-03-10T11:52:22.019278+0000 mgr.y (mgr.44107) 379 : audit [DBG] from='client.44596 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: audit 2026-03-10T11:52:22.638248+0000 mon.c (mon.1) 435 : audit [DBG] from='client.? 192.168.123.105:0/2129301839' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: audit 2026-03-10T11:52:22.638248+0000 mon.c (mon.1) 435 : audit [DBG] from='client.? 
192.168.123.105:0/2129301839' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: audit 2026-03-10T11:52:22.973371+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:22 vm05 bash[65415]: audit 2026-03-10T11:52:22.973371+0000 mon.a (mon.0) 620 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: audit 2026-03-10T11:52:21.468769+0000 mgr.y (mgr.44107) 375 : audit [DBG] from='client.44590 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: audit 2026-03-10T11:52:21.468769+0000 mgr.y (mgr.44107) 375 : audit [DBG] from='client.44590 -' entity='client.admin' cmd=[{"prefix": "orch upgrade start", "image": "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: cephadm 2026-03-10T11:52:21.469184+0000 mgr.y (mgr.44107) 376 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: cephadm 2026-03-10T11:52:21.469184+0000 mgr.y (mgr.44107) 376 : cephadm [INF] Upgrade: Started with target quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: cluster 2026-03-10T11:52:21.481505+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: cluster 2026-03-10T11:52:21.481505+0000 mgr.y (mgr.44107) 377 : cluster [DBG] pgmap v190: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: cephadm 2026-03-10T11:52:21.537196+0000 mgr.y (mgr.44107) 378 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: cephadm 2026-03-10T11:52:21.537196+0000 mgr.y (mgr.44107) 378 : cephadm [INF] Upgrade: First pull of quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: audit 2026-03-10T11:52:22.019278+0000 mgr.y (mgr.44107) 379 : audit [DBG] from='client.44596 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 bash[68966]: audit 2026-03-10T11:52:22.019278+0000 mgr.y (mgr.44107) 379 : audit [DBG] from='client.44596 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch 2026-03-10T11:52:23.090 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:22 vm05 
2026-03-10T11:52:23.160 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:22.215570+0000 mgr.y (mgr.44107) 380 : audit [DBG] from='client.44602 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:22.405597+0000 mgr.y (mgr.44107) 381 : audit [DBG] from='client.44608 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:22.864132+0000 mgr.y (mgr.44107) 382 : audit [DBG] from='client.54654 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:22.976107+0000 mgr.y (mgr.44107) 383 : cephadm [INF] Upgrade: Target is version 19.2.3-678-ge911bdeb (squid)
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:22.976134+0000 mgr.y (mgr.44107) 384 : cephadm [INF] Upgrade: Target container is quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc, digests ['quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc']
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:22.977117+0000 mon.c (mon.1) 436 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:22.978607+0000 mon.c (mon.1) 437 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:22.986925+0000 mgr.y (mgr.44107) 385 : cephadm [INF] Upgrade: Setting container_image for all mgr
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:22.990994+0000 mon.a (mon.0) 621 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:22.995722+0000 mon.c (mon.1) 438 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
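Before converting any daemon, the mgr resolves the requested tag to an immutable digest (entries 383/384 above). One way to cross-check that resolution from a host, assuming podman is the container runtime on these nodes:

  # Pull the tag the upgrade was started with and print the repo digest
  # podman records for it; it should match the sha256 in entry 384.
  podman pull quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
  podman image inspect --format '{{index .RepoDigests 0}}' \
      quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df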
2026-03-10T11:52:24.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:22.996188+0000 mgr.y (mgr.44107) 386 : cephadm [INF] Upgrade: Setting container_image for all mon
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:22.999481+0000 mon.a (mon.0) 622 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.003432+0000 mon.c (mon.1) 439 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:23.004046+0000 mgr.y (mgr.44107) 387 : cephadm [INF] Upgrade: Setting container_image for all crash
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.006764+0000 mon.a (mon.0) 623 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.011710+0000 mon.c (mon.1) 440 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:23.012382+0000 mgr.y (mgr.44107) 388 : cephadm [INF] Upgrade: Setting container_image for all osd
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.016004+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.021142+0000 mon.c (mon.1) 441 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:23.021789+0000 mgr.y (mgr.44107) 389 : cephadm [INF] Upgrade: Setting container_image for all mds
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.025009+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.029933+0000 mon.c (mon.1) 442 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:23.030570+0000 mgr.y (mgr.44107) 390 : cephadm [INF] Upgrade: Setting container_image for all rgw
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.033959+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.038129+0000 mon.c (mon.1) 443 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:23.038755+0000 mgr.y (mgr.44107) 391 : cephadm [INF] Upgrade: Setting container_image for all rbd-mirror
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.041650+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.045593+0000 mon.c (mon.1) 444 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:23.046199+0000 mgr.y (mgr.44107) 392 : cephadm [INF] Upgrade: Setting container_image for all cephfs-mirror
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.047324+0000 mon.c (mon.1) 445 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: cephadm 2026-03-10T11:52:23.047924+0000 mgr.y (mgr.44107) 393 : cephadm [INF] Upgrade: Setting container_image for all ceph-exporter
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.050744+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.44107 ' entity='mgr.y'
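The run of "Setting container_image for all <type>" entries is the upgrade pinning the resolved digest per daemon type before any daemon restarts; each one is paired with a mon.a audit [INF] that looks like the resulting config write. A rough hand-driven equivalent for the core types only (cephadm maps the remaining types, such as crash and rgw, onto their own config sections, so the section names below are the certain cases):

  # Pin the image the core daemon types should run from now on
  # (digest copied from entry 384 above).
  img=quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
  ceph config set mgr container_image "$img"
  ceph config set mon container_image "$img"
  ceph config set osd container_image "$img"
  ceph config set mds container_image "$img"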
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.161101+0000 mon.b (mon.2) 37 : audit [DBG] from='client.? 192.168.123.105:0/1350206109' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.448767+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.453199+0000 mon.c (mon.1) 446 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.453573+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:23 vm05 bash[65415]: audit 2026-03-10T11:52:23.457244+0000 mon.c (mon.1) 447 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:24.341 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:52:23 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:24.341 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:23 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
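Entries 446/630 above capture the mgr minting the cephx key for the iscsi daemon it is about to deploy. Rendered as the equivalent CLI call, with the entity name and caps exactly as quoted in the audit record:

  # Create (or fetch) the cephx key cephadm provisions for the iscsi daemon.
  ceph auth get-or-create client.iscsi.foo.vm05.txapnk \
      mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
      mgr 'allow command "service status"' \
      osd 'allow rwx'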
2026-03-10T11:52:24.343 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:52:23 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:24.343 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:52:23 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:24.343 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:52:23 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:24.343 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:52:23 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:24.343 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:52:23 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:24.343 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:52:23 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
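The recurring systemd complaint comes from line 23 of the cephadm-generated unit template for this fsid, which ships KillMode=none for every daemon; it is a warning only, and the v17.2.0 bootstrap image explains why it appears here. Purely as an illustration, a drop-in that would quiet it on one host (unit name taken from the log, override value per systemd's own suggestion):

  # Override KillMode for the cephadm unit template via a systemd drop-in.
  mkdir -p /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d
  printf '[Service]\nKillMode=mixed\n' \
      > /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service.d/killmode.conf
  systemctl daemon-reload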
2026-03-10T11:52:24.447 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:23 vm07 bash[46158]: audit 2026-03-10T11:52:23.448767+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:24.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:23 vm07 bash[46158]: audit 2026-03-10T11:52:23.453199+0000 mon.c (mon.1) 446 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:52:24.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:23 vm07 bash[46158]: audit 2026-03-10T11:52:23.453573+0000 mon.a (mon.0) 630 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.foo.vm05.txapnk", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-10T11:52:24.448 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:24 vm07 bash[46158]: audit 2026-03-10T11:52:23.457244+0000 mon.c (mon.1) 447 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:25.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:24 vm05 bash[65415]: cephadm 2026-03-10T11:52:23.444106+0000 mgr.y (mgr.44107) 394 : cephadm [INF] Upgrade: Updating iscsi.foo.vm05.txapnk
2026-03-10T11:52:25.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:24 vm05 bash[65415]: cephadm 2026-03-10T11:52:23.458108+0000 mgr.y (mgr.44107) 395 : cephadm [INF] Deploying daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:52:25.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:24 vm05 bash[65415]: cluster 2026-03-10T11:52:23.481907+0000 mgr.y (mgr.44107) 396 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:25.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:24 vm05 bash[68966]: cephadm 2026-03-10T11:52:23.444106+0000 mgr.y (mgr.44107) 394 : cephadm [INF] Upgrade: Updating iscsi.foo.vm05.txapnk
2026-03-10T11:52:25.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:24 vm05 bash[68966]: cephadm 2026-03-10T11:52:23.458108+0000 mgr.y (mgr.44107) 395 : cephadm [INF] Deploying daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:52:25.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:24 vm05 bash[68966]: cluster 2026-03-10T11:52:23.481907+0000 mgr.y (mgr.44107) 396 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:25.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:24 vm07 bash[46158]: cephadm 2026-03-10T11:52:23.444106+0000 mgr.y (mgr.44107) 394 : cephadm [INF] Upgrade: Updating iscsi.foo.vm05.txapnk
2026-03-10T11:52:25.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:24 vm07 bash[46158]: cephadm 2026-03-10T11:52:23.458108+0000 mgr.y (mgr.44107) 395 : cephadm [INF] Deploying daemon iscsi.foo.vm05.txapnk on vm05
2026-03-10T11:52:25.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:24 vm07 bash[46158]: cluster 2026-03-10T11:52:23.481907+0000 mgr.y (mgr.44107) 396 : cluster [DBG] pgmap v191: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:27.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:26 vm05 bash[65415]: cluster 2026-03-10T11:52:25.482262+0000 mgr.y (mgr.44107) 397 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:27.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:26 vm05 bash[68966]: cluster 2026-03-10T11:52:25.482262+0000 mgr.y (mgr.44107) 397 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:27.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:26 vm07 bash[46158]: cluster 2026-03-10T11:52:25.482262+0000 mgr.y (mgr.44107) 397 : cluster [DBG] pgmap v192: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:29.227 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:28 vm05 bash[68966]: cluster 2026-03-10T11:52:27.482719+0000 mgr.y (mgr.44107) 398 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:29.227 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:28 vm05 bash[65415]: cluster 2026-03-10T11:52:27.482719+0000 mgr.y (mgr.44107) 398 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:29.228 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:52:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:52:28] "GET /metrics HTTP/1.1" 200 37924 "" "Prometheus/2.51.0"
2026-03-10T11:52:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:28 vm07 bash[46158]: cluster 2026-03-10T11:52:27.482719+0000 mgr.y (mgr.44107) 398 : cluster [DBG] pgmap v193: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:31.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:31 vm05 bash[68966]: audit 2026-03-10T11:52:29.232565+0000 mgr.y (mgr.44107) 399 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:31.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:31 vm05 bash[68966]: cluster 2026-03-10T11:52:29.483106+0000 mgr.y (mgr.44107) 400 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:31.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:31 vm05 bash[65415]: audit 2026-03-10T11:52:29.232565+0000 mgr.y (mgr.44107) 399 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:31.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:31 vm05 bash[65415]: cluster 2026-03-10T11:52:29.483106+0000 mgr.y (mgr.44107) 400 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:30 vm07 bash[46158]: audit 2026-03-10T11:52:29.232565+0000 mgr.y (mgr.44107) 399 : audit [DBG] from='client.15198 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:31 vm07 bash[46158]: cluster 2026-03-10T11:52:29.483106+0000 mgr.y (mgr.44107) 400 : cluster [DBG] pgmap v194: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:33.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:33 vm05 bash[68966]: cluster 2026-03-10T11:52:31.483483+0000 mgr.y (mgr.44107) 401 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:33.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:33 vm05 bash[65415]: cluster 2026-03-10T11:52:31.483483+0000 mgr.y (mgr.44107) 401 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:33 vm07 bash[46158]: cluster 2026-03-10T11:52:31.483483+0000 mgr.y (mgr.44107) 401 : cluster [DBG] pgmap v195: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:34.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:34.340 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:52:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:34.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:34.340 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:52:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:34.340 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:52:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:34.340 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:52:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:34.340 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:52:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:34.340 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:52:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:34.340 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:52:34 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:52:35.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:35 vm05 bash[65415]: cluster 2026-03-10T11:52:33.483855+0000 mgr.y (mgr.44107) 402 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:35.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:35 vm05 bash[65415]: audit 2026-03-10T11:52:34.380500+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:35.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:35 vm05 bash[65415]: audit 2026-03-10T11:52:34.393280+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:35.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:35 vm05 bash[65415]: audit 2026-03-10T11:52:34.401148+0000 mon.c (mon.1) 448 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:35.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:35 vm05 bash[65415]: audit 2026-03-10T11:52:34.922038+0000 mon.a (mon.0) 633 : audit [DBG] from='client.? 192.168.123.105:0/1830656990' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:52:35.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:35 vm05 bash[68966]: cluster 2026-03-10T11:52:33.483855+0000 mgr.y (mgr.44107) 402 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:35.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:35 vm05 bash[68966]: audit 2026-03-10T11:52:34.380500+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:35.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:35 vm05 bash[68966]: audit 2026-03-10T11:52:34.393280+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:35.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:35 vm05 bash[68966]: audit 2026-03-10T11:52:34.401148+0000 mon.c (mon.1) 448 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:35.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:35 vm05 bash[68966]: audit 2026-03-10T11:52:34.922038+0000 mon.a (mon.0) 633 : audit [DBG] from='client.? 192.168.123.105:0/1830656990' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:52:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:35 vm07 bash[46158]: cluster 2026-03-10T11:52:33.483855+0000 mgr.y (mgr.44107) 402 : cluster [DBG] pgmap v196: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:35 vm07 bash[46158]: audit 2026-03-10T11:52:34.380500+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:35 vm07 bash[46158]: audit 2026-03-10T11:52:34.393280+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:35 vm07 bash[46158]: audit 2026-03-10T11:52:34.401148+0000 mon.c (mon.1) 448 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:35 vm07 bash[46158]: audit 2026-03-10T11:52:34.922038+0000 mon.a (mon.0) 633 : audit [DBG] from='client.? 192.168.123.105:0/1830656990' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-10T11:52:36.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:36 vm05 bash[65415]: audit 2026-03-10T11:52:35.086194+0000 mon.c (mon.1) 449 : audit [INF] from='client.? 192.168.123.105:0/4006041742' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]: dispatch
2026-03-10T11:52:36.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:36 vm05 bash[65415]: audit 2026-03-10T11:52:35.086730+0000 mon.a (mon.0) 634 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]: dispatch
2026-03-10T11:52:36.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:36 vm05 bash[65415]: audit 2026-03-10T11:52:35.496328+0000 mon.c (mon.1) 450 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:52:36.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:36 vm05 bash[68966]: audit 2026-03-10T11:52:35.086194+0000 mon.c (mon.1) 449 : audit [INF] from='client.? 192.168.123.105:0/4006041742' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]: dispatch
2026-03-10T11:52:36.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:36 vm05 bash[68966]: audit 2026-03-10T11:52:35.086730+0000 mon.a (mon.0) 634 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]: dispatch
2026-03-10T11:52:36.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:36 vm05 bash[68966]: audit 2026-03-10T11:52:35.496328+0000 mon.c (mon.1) 450 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:52:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:36 vm07 bash[46158]: audit 2026-03-10T11:52:35.086194+0000 mon.c (mon.1) 449 : audit [INF] from='client.? 192.168.123.105:0/4006041742' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]: dispatch
192.168.123.105:0/4006041742' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]: dispatch 2026-03-10T11:52:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:36 vm07 bash[46158]: audit 2026-03-10T11:52:35.086730+0000 mon.a (mon.0) 634 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]: dispatch 2026-03-10T11:52:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:36 vm07 bash[46158]: audit 2026-03-10T11:52:35.086730+0000 mon.a (mon.0) 634 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]: dispatch 2026-03-10T11:52:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:36 vm07 bash[46158]: audit 2026-03-10T11:52:35.496328+0000 mon.c (mon.1) 450 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:52:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:36 vm07 bash[46158]: audit 2026-03-10T11:52:35.496328+0000 mon.c (mon.1) 450 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:52:37.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: cluster 2026-03-10T11:52:35.484237+0000 mgr.y (mgr.44107) 403 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: cluster 2026-03-10T11:52:35.484237+0000 mgr.y (mgr.44107) 403 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: audit 2026-03-10T11:52:36.020113+0000 mon.a (mon.0) 635 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]': finished 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: audit 2026-03-10T11:52:36.020113+0000 mon.a (mon.0) 635 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]': finished 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: cluster 2026-03-10T11:52:36.027683+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: cluster 2026-03-10T11:52:36.027683+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: audit 2026-03-10T11:52:36.192026+0000 mon.c (mon.1) 451 : audit [INF] from='client.? 
192.168.123.105:0/1690594238' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: audit 2026-03-10T11:52:36.192026+0000 mon.c (mon.1) 451 : audit [INF] from='client.? 192.168.123.105:0/1690594238' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: audit 2026-03-10T11:52:36.192579+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:37 vm05 bash[65415]: audit 2026-03-10T11:52:36.192579+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: cluster 2026-03-10T11:52:35.484237+0000 mgr.y (mgr.44107) 403 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: cluster 2026-03-10T11:52:35.484237+0000 mgr.y (mgr.44107) 403 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: audit 2026-03-10T11:52:36.020113+0000 mon.a (mon.0) 635 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]': finished 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: audit 2026-03-10T11:52:36.020113+0000 mon.a (mon.0) 635 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]': finished 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: cluster 2026-03-10T11:52:36.027683+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: cluster 2026-03-10T11:52:36.027683+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: audit 2026-03-10T11:52:36.192026+0000 mon.c (mon.1) 451 : audit [INF] from='client.? 192.168.123.105:0/1690594238' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: audit 2026-03-10T11:52:36.192026+0000 mon.c (mon.1) 451 : audit [INF] from='client.? 
192.168.123.105:0/1690594238' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: audit 2026-03-10T11:52:36.192579+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:37 vm05 bash[68966]: audit 2026-03-10T11:52:36.192579+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: cluster 2026-03-10T11:52:35.484237+0000 mgr.y (mgr.44107) 403 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: cluster 2026-03-10T11:52:35.484237+0000 mgr.y (mgr.44107) 403 : cluster [DBG] pgmap v197: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: audit 2026-03-10T11:52:36.020113+0000 mon.a (mon.0) 635 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]': finished 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: audit 2026-03-10T11:52:36.020113+0000 mon.a (mon.0) 635 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/1131856542"}]': finished 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: cluster 2026-03-10T11:52:36.027683+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: cluster 2026-03-10T11:52:36.027683+0000 mon.a (mon.0) 636 : cluster [DBG] osdmap e144: 8 total, 8 up, 8 in 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: audit 2026-03-10T11:52:36.192026+0000 mon.c (mon.1) 451 : audit [INF] from='client.? 192.168.123.105:0/1690594238' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: audit 2026-03-10T11:52:36.192026+0000 mon.c (mon.1) 451 : audit [INF] from='client.? 192.168.123.105:0/1690594238' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: audit 2026-03-10T11:52:36.192579+0000 mon.a (mon.0) 637 : audit [INF] from='client.? 
' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:37 vm07 bash[46158]: audit 2026-03-10T11:52:36.192579+0000 mon.a (mon.0) 637 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]: dispatch 2026-03-10T11:52:38.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:38 vm05 bash[65415]: audit 2026-03-10T11:52:37.035515+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]': finished 2026-03-10T11:52:38.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:38 vm05 bash[65415]: audit 2026-03-10T11:52:37.035515+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]': finished 2026-03-10T11:52:38.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:38 vm05 bash[65415]: cluster 2026-03-10T11:52:37.038379+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T11:52:38.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:38 vm05 bash[65415]: cluster 2026-03-10T11:52:37.038379+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T11:52:38.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:38 vm05 bash[65415]: audit 2026-03-10T11:52:37.215169+0000 mon.a (mon.0) 640 : audit [INF] from='client.? 192.168.123.105:0/1311191644' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3594246957"}]: dispatch 2026-03-10T11:52:38.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:38 vm05 bash[65415]: audit 2026-03-10T11:52:37.215169+0000 mon.a (mon.0) 640 : audit [INF] from='client.? 192.168.123.105:0/1311191644' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3594246957"}]: dispatch 2026-03-10T11:52:38.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:38 vm05 bash[68966]: audit 2026-03-10T11:52:37.035515+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]': finished 2026-03-10T11:52:38.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:38 vm05 bash[68966]: audit 2026-03-10T11:52:37.035515+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]': finished 2026-03-10T11:52:38.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:38 vm05 bash[68966]: cluster 2026-03-10T11:52:37.038379+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T11:52:38.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:38 vm05 bash[68966]: cluster 2026-03-10T11:52:37.038379+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in 2026-03-10T11:52:38.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:38 vm05 bash[68966]: audit 2026-03-10T11:52:37.215169+0000 mon.a (mon.0) 640 : audit [INF] from='client.? 
2026-03-10T11:52:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:38 vm07 bash[46158]: audit 2026-03-10T11:52:37.035515+0000 mon.a (mon.0) 638 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/4246311496"}]': finished
2026-03-10T11:52:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:38 vm07 bash[46158]: cluster 2026-03-10T11:52:37.038379+0000 mon.a (mon.0) 639 : cluster [DBG] osdmap e145: 8 total, 8 up, 8 in
2026-03-10T11:52:38.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:38 vm07 bash[46158]: audit 2026-03-10T11:52:37.215169+0000 mon.a (mon.0) 640 : audit [INF] from='client.? 192.168.123.105:0/1311191644' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3594246957"}]: dispatch
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:39 vm05 bash[68966]: cluster 2026-03-10T11:52:37.484537+0000 mgr.y (mgr.44107) 404 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:39 vm05 bash[68966]: audit 2026-03-10T11:52:38.054284+0000 mon.a (mon.0) 641 : audit [INF] from='client.? 192.168.123.105:0/1311191644' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3594246957"}]': finished
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:39 vm05 bash[68966]: cluster 2026-03-10T11:52:38.063908+0000 mon.a (mon.0) 642 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:39 vm05 bash[68966]: audit 2026-03-10T11:52:38.220506+0000 mon.c (mon.1) 452 : audit [INF] from='client.? 192.168.123.105:0/2005362648' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]: dispatch
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:39 vm05 bash[68966]: audit 2026-03-10T11:52:38.220985+0000 mon.a (mon.0) 643 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]: dispatch
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:52:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:52:38] "GET /metrics HTTP/1.1" 200 37921 "" "Prometheus/2.51.0"
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:39 vm05 bash[65415]: cluster 2026-03-10T11:52:37.484537+0000 mgr.y (mgr.44107) 404 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:39 vm05 bash[65415]: audit 2026-03-10T11:52:38.054284+0000 mon.a (mon.0) 641 : audit [INF] from='client.? 192.168.123.105:0/1311191644' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3594246957"}]': finished
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:39 vm05 bash[65415]: cluster 2026-03-10T11:52:38.063908+0000 mon.a (mon.0) 642 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:39 vm05 bash[65415]: audit 2026-03-10T11:52:38.220506+0000 mon.c (mon.1) 452 : audit [INF] from='client.? 192.168.123.105:0/2005362648' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]: dispatch
2026-03-10T11:52:39.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:39 vm05 bash[65415]: audit 2026-03-10T11:52:38.220985+0000 mon.a (mon.0) 643 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]: dispatch
2026-03-10T11:52:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:39 vm07 bash[46158]: cluster 2026-03-10T11:52:37.484537+0000 mgr.y (mgr.44107) 404 : cluster [DBG] pgmap v200: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:52:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:39 vm07 bash[46158]: audit 2026-03-10T11:52:38.054284+0000 mon.a (mon.0) 641 : audit [INF] from='client.? 192.168.123.105:0/1311191644' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6800/3594246957"}]': finished
2026-03-10T11:52:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:39 vm07 bash[46158]: cluster 2026-03-10T11:52:38.063908+0000 mon.a (mon.0) 642 : cluster [DBG] osdmap e146: 8 total, 8 up, 8 in
2026-03-10T11:52:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:39 vm07 bash[46158]: audit 2026-03-10T11:52:38.220506+0000 mon.c (mon.1) 452 : audit [INF] from='client.? 192.168.123.105:0/2005362648' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]: dispatch
2026-03-10T11:52:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:39 vm07 bash[46158]: audit 2026-03-10T11:52:38.220985+0000 mon.a (mon.0) 643 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]: dispatch
2026-03-10T11:52:40.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:40 vm05 bash[65415]: audit 2026-03-10T11:52:39.065815+0000 mon.a (mon.0) 644 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]': finished
2026-03-10T11:52:40.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:40 vm05 bash[65415]: cluster 2026-03-10T11:52:39.069395+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in
2026-03-10T11:52:40.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:40 vm05 bash[65415]: audit 2026-03-10T11:52:39.234815+0000 mon.a (mon.0) 646 : audit [INF] from='client.?
192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]: dispatch 2026-03-10T11:52:40.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:40 vm05 bash[65415]: audit 2026-03-10T11:52:39.632827+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:40 vm05 bash[65415]: audit 2026-03-10T11:52:39.632827+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:40 vm05 bash[65415]: audit 2026-03-10T11:52:39.639688+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:40 vm05 bash[65415]: audit 2026-03-10T11:52:39.639688+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: audit 2026-03-10T11:52:39.065815+0000 mon.a (mon.0) 644 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]': finished 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: audit 2026-03-10T11:52:39.065815+0000 mon.a (mon.0) 644 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]': finished 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: cluster 2026-03-10T11:52:39.069395+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: cluster 2026-03-10T11:52:39.069395+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: audit 2026-03-10T11:52:39.234815+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]: dispatch 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: audit 2026-03-10T11:52:39.234815+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 
192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]: dispatch 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: audit 2026-03-10T11:52:39.632827+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: audit 2026-03-10T11:52:39.632827+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: audit 2026-03-10T11:52:39.639688+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:40 vm05 bash[68966]: audit 2026-03-10T11:52:39.639688+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: audit 2026-03-10T11:52:39.065815+0000 mon.a (mon.0) 644 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]': finished 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: audit 2026-03-10T11:52:39.065815+0000 mon.a (mon.0) 644 : audit [INF] from='client.? ' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/3437771575"}]': finished 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: cluster 2026-03-10T11:52:39.069395+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: cluster 2026-03-10T11:52:39.069395+0000 mon.a (mon.0) 645 : cluster [DBG] osdmap e147: 8 total, 8 up, 8 in 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: audit 2026-03-10T11:52:39.234815+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]: dispatch 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: audit 2026-03-10T11:52:39.234815+0000 mon.a (mon.0) 646 : audit [INF] from='client.? 
192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]: dispatch 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: audit 2026-03-10T11:52:39.632827+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: audit 2026-03-10T11:52:39.632827+0000 mon.a (mon.0) 647 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: audit 2026-03-10T11:52:39.639688+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:40.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:40 vm07 bash[46158]: audit 2026-03-10T11:52:39.639688+0000 mon.a (mon.0) 648 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: cluster 2026-03-10T11:52:39.485058+0000 mgr.y (mgr.44107) 405 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:52:41.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: cluster 2026-03-10T11:52:39.485058+0000 mgr.y (mgr.44107) 405 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: audit 2026-03-10T11:52:40.074162+0000 mon.a (mon.0) 649 : audit [INF] from='client.? 192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]': finished 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: audit 2026-03-10T11:52:40.074162+0000 mon.a (mon.0) 649 : audit [INF] from='client.? 
192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]': finished 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: cluster 2026-03-10T11:52:40.087399+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: cluster 2026-03-10T11:52:40.087399+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: audit 2026-03-10T11:52:40.211604+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: audit 2026-03-10T11:52:40.211604+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: audit 2026-03-10T11:52:40.217707+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: audit 2026-03-10T11:52:40.217707+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: audit 2026-03-10T11:52:40.260267+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]: dispatch 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:41 vm05 bash[65415]: audit 2026-03-10T11:52:40.260267+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]: dispatch 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: cluster 2026-03-10T11:52:39.485058+0000 mgr.y (mgr.44107) 405 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: cluster 2026-03-10T11:52:39.485058+0000 mgr.y (mgr.44107) 405 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: audit 2026-03-10T11:52:40.074162+0000 mon.a (mon.0) 649 : audit [INF] from='client.? 192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]': finished 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: audit 2026-03-10T11:52:40.074162+0000 mon.a (mon.0) 649 : audit [INF] from='client.? 
192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]': finished 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: cluster 2026-03-10T11:52:40.087399+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: cluster 2026-03-10T11:52:40.087399+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: audit 2026-03-10T11:52:40.211604+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: audit 2026-03-10T11:52:40.211604+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: audit 2026-03-10T11:52:40.217707+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: audit 2026-03-10T11:52:40.217707+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: audit 2026-03-10T11:52:40.260267+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]: dispatch 2026-03-10T11:52:41.340 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:41 vm05 bash[68966]: audit 2026-03-10T11:52:40.260267+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]: dispatch 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: cluster 2026-03-10T11:52:39.485058+0000 mgr.y (mgr.44107) 405 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: cluster 2026-03-10T11:52:39.485058+0000 mgr.y (mgr.44107) 405 : cluster [DBG] pgmap v203: 161 pgs: 161 active+clean; 457 KiB data, 287 MiB used, 160 GiB / 160 GiB avail 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: audit 2026-03-10T11:52:40.074162+0000 mon.a (mon.0) 649 : audit [INF] from='client.? 192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]': finished 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: audit 2026-03-10T11:52:40.074162+0000 mon.a (mon.0) 649 : audit [INF] from='client.? 
192.168.123.105:0/471541049' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:6801/3594246957"}]': finished 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: cluster 2026-03-10T11:52:40.087399+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: cluster 2026-03-10T11:52:40.087399+0000 mon.a (mon.0) 650 : cluster [DBG] osdmap e148: 8 total, 8 up, 8 in 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: audit 2026-03-10T11:52:40.211604+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: audit 2026-03-10T11:52:40.211604+0000 mon.a (mon.0) 651 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: audit 2026-03-10T11:52:40.217707+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: audit 2026-03-10T11:52:40.217707+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: audit 2026-03-10T11:52:40.260267+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]: dispatch 2026-03-10T11:52:41.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:41 vm07 bash[46158]: audit 2026-03-10T11:52:40.260267+0000 mon.a (mon.0) 653 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]: dispatch 2026-03-10T11:52:42.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:42 vm05 bash[65415]: audit 2026-03-10T11:52:41.224336+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]': finished 2026-03-10T11:52:42.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:42 vm05 bash[65415]: audit 2026-03-10T11:52:41.224336+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 
192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]': finished 2026-03-10T11:52:42.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:42 vm05 bash[65415]: cluster 2026-03-10T11:52:41.233534+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T11:52:42.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:42 vm05 bash[65415]: cluster 2026-03-10T11:52:41.233534+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T11:52:42.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:42 vm05 bash[65415]: cluster 2026-03-10T11:52:41.485363+0000 mgr.y (mgr.44107) 406 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:42.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:42 vm05 bash[65415]: cluster 2026-03-10T11:52:41.485363+0000 mgr.y (mgr.44107) 406 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:42.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:42 vm05 bash[68966]: audit 2026-03-10T11:52:41.224336+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]': finished 2026-03-10T11:52:42.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:42 vm05 bash[68966]: audit 2026-03-10T11:52:41.224336+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]': finished 2026-03-10T11:52:42.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:42 vm05 bash[68966]: cluster 2026-03-10T11:52:41.233534+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T11:52:42.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:42 vm05 bash[68966]: cluster 2026-03-10T11:52:41.233534+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T11:52:42.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:42 vm05 bash[68966]: cluster 2026-03-10T11:52:41.485363+0000 mgr.y (mgr.44107) 406 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:42.590 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:42 vm05 bash[68966]: cluster 2026-03-10T11:52:41.485363+0000 mgr.y (mgr.44107) 406 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:42.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:42 vm07 bash[46158]: audit 2026-03-10T11:52:41.224336+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]': finished 2026-03-10T11:52:42.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:42 vm07 bash[46158]: audit 2026-03-10T11:52:41.224336+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 
192.168.123.105:0/1042428739' entity='client.iscsi.foo.vm05.txapnk' cmd='[{"prefix": "osd blocklist", "blocklistop": "rm", "addr": "192.168.123.105:0/616014615"}]': finished 2026-03-10T11:52:42.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:42 vm07 bash[46158]: cluster 2026-03-10T11:52:41.233534+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T11:52:42.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:42 vm07 bash[46158]: cluster 2026-03-10T11:52:41.233534+0000 mon.a (mon.0) 655 : cluster [DBG] osdmap e149: 8 total, 8 up, 8 in 2026-03-10T11:52:42.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:42 vm07 bash[46158]: cluster 2026-03-10T11:52:41.485363+0000 mgr.y (mgr.44107) 406 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:42.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:42 vm07 bash[46158]: cluster 2026-03-10T11:52:41.485363+0000 mgr.y (mgr.44107) 406 : cluster [DBG] pgmap v206: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:44.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:44 vm05 bash[65415]: cluster 2026-03-10T11:52:43.485720+0000 mgr.y (mgr.44107) 407 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 943 B/s rd, 0 op/s 2026-03-10T11:52:44.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:44 vm05 bash[65415]: cluster 2026-03-10T11:52:43.485720+0000 mgr.y (mgr.44107) 407 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 943 B/s rd, 0 op/s 2026-03-10T11:52:44.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:44 vm05 bash[68966]: cluster 2026-03-10T11:52:43.485720+0000 mgr.y (mgr.44107) 407 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 943 B/s rd, 0 op/s 2026-03-10T11:52:44.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:44 vm05 bash[68966]: cluster 2026-03-10T11:52:43.485720+0000 mgr.y (mgr.44107) 407 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 943 B/s rd, 0 op/s 2026-03-10T11:52:44.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:44 vm07 bash[46158]: cluster 2026-03-10T11:52:43.485720+0000 mgr.y (mgr.44107) 407 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 943 B/s rd, 0 op/s 2026-03-10T11:52:44.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:44 vm07 bash[46158]: cluster 2026-03-10T11:52:43.485720+0000 mgr.y (mgr.44107) 407 : cluster [DBG] pgmap v207: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 943 B/s rd, 0 op/s 2026-03-10T11:52:45.803 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:45 vm05 bash[65415]: audit 2026-03-10T11:52:44.779984+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:45.803 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:45 vm05 bash[65415]: audit 2026-03-10T11:52:44.779984+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:45.803 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:45 vm05 
bash[68966]: audit 2026-03-10T11:52:44.779984+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:45.803 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:45 vm05 bash[68966]: audit 2026-03-10T11:52:44.779984+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:45.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:45 vm07 bash[46158]: audit 2026-03-10T11:52:44.779984+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:45.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:45 vm07 bash[46158]: audit 2026-03-10T11:52:44.779984+0000 mgr.y (mgr.44107) 408 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cluster 2026-03-10T11:52:45.486136+0000 mgr.y (mgr.44107) 409 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cluster 2026-03-10T11:52:45.486136+0000 mgr.y (mgr.44107) 409 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.768714+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.768714+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.777348+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.777348+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.781373+0000 mon.c (mon.1) 453 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.781373+0000 mon.c (mon.1) 453 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.782385+0000 mon.c (mon.1) 454 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.782385+0000 mon.c (mon.1) 454 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.787312+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.787312+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.803737+0000 mon.c (mon.1) 455 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.803737+0000 mon.c (mon.1) 455 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.804591+0000 mgr.y (mgr.44107) 410 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.804591+0000 mgr.y (mgr.44107) 410 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.814381+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.814381+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.818169+0000 mgr.y (mgr.44107) 411 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.818169+0000 mgr.y (mgr.44107) 411 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.818418+0000 mon.c (mon.1) 456 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.818418+0000 mon.c (mon.1) 456 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.818809+0000 mgr.y (mgr.44107) 412 : audit [DBG] from='mon.1 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.818809+0000 mgr.y (mgr.44107) 412 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.820255+0000 mon.c (mon.1) 457 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.820255+0000 mon.c (mon.1) 457 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.820650+0000 mgr.y (mgr.44107) 413 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.820650+0000 mgr.y (mgr.44107) 413 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.824387+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.824387+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.855092+0000 mon.c (mon.1) 458 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.855092+0000 mon.c (mon.1) 458 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.856983+0000 mon.c (mon.1) 459 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.090 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.856983+0000 mon.c (mon.1) 459 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.858258+0000 mon.c (mon.1) 460 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.858258+0000 mon.c (mon.1) 460 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: 
dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.859196+0000 mon.c (mon.1) 461 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.859196+0000 mon.c (mon.1) 461 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.860653+0000 mon.c (mon.1) 462 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.860653+0000 mon.c (mon.1) 462 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.862284+0000 mon.c (mon.1) 463 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.862284+0000 mon.c (mon.1) 463 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.863336+0000 mon.c (mon.1) 464 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.863336+0000 mon.c (mon.1) 464 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.864253+0000 mon.c (mon.1) 465 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.864253+0000 mon.c (mon.1) 465 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.865168+0000 mon.c (mon.1) 466 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.865168+0000 mon.c (mon.1) 466 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.866059+0000 mon.c (mon.1) 467 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 
bash[65415]: audit 2026-03-10T11:52:45.866059+0000 mon.c (mon.1) 467 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.867026+0000 mon.c (mon.1) 468 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.867026+0000 mon.c (mon.1) 468 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.867637+0000 mgr.y (mgr.44107) 414 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.867637+0000 mgr.y (mgr.44107) 414 : cephadm [INF] Upgrade: Setting container_image for all iscsi 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.872087+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.872087+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.874690+0000 mon.c (mon.1) 469 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.874690+0000 mon.c (mon.1) 469 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.874893+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.874893+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.878041+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]': finished 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.878041+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]': finished 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 
2026-03-10T11:52:45.880834+0000 mon.c (mon.1) 470 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.880834+0000 mon.c (mon.1) 470 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.881385+0000 mgr.y (mgr.44107) 415 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.881385+0000 mgr.y (mgr.44107) 415 : cephadm [INF] Upgrade: Setting container_image for all nfs 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.884966+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.884966+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.888164+0000 mon.c (mon.1) 471 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.888164+0000 mon.c (mon.1) 471 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.888703+0000 mgr.y (mgr.44107) 416 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.888703+0000 mgr.y (mgr.44107) 416 : cephadm [INF] Upgrade: Setting container_image for all nvmeof 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.892651+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.892651+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.44107 ' entity='mgr.y' 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.895505+0000 mon.c (mon.1) 472 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.895505+0000 mon.c (mon.1) 472 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.896472+0000 mon.c (mon.1) 473 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch 2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 
2026-03-10T11:52:45.896472+0000 mon.c (mon.1) 473 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.897365+0000 mon.c (mon.1) 474 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.898230+0000 mon.c (mon.1) 475 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.898975+0000 mon.c (mon.1) 476 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.899760+0000 mon.c (mon.1) 477 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.900239+0000 mgr.y (mgr.44107) 417 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.900987+0000 mon.c (mon.1) 478 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.901154+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.904949+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:52:47.091 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.907272+0000 mon.c (mon.1) 479 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.907467+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.910799+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.913051+0000 mon.c (mon.1) 480 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: cluster 2026-03-10T11:52:45.486136+0000 mgr.y (mgr.44107) 409 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.768714+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.777348+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.781373+0000 mon.c (mon.1) 453 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.782385+0000 mon.c (mon.1) 454 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.787312+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.803737+0000 mon.c (mon.1) 455 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.804591+0000 mgr.y (mgr.44107) 410 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.814381+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: cephadm 2026-03-10T11:52:45.818169+0000 mgr.y (mgr.44107) 411 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.818418+0000 mon.c (mon.1) 456 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.818809+0000 mgr.y (mgr.44107) 412 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.820255+0000 mon.c (mon.1) 457 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.820650+0000 mgr.y (mgr.44107) 413 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.824387+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.855092+0000 mon.c (mon.1) 458 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.856983+0000 mon.c (mon.1) 459 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.858258+0000 mon.c (mon.1) 460 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.859196+0000 mon.c (mon.1) 461 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.860653+0000 mon.c (mon.1) 462 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.862284+0000 mon.c (mon.1) 463 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.863336+0000 mon.c (mon.1) 464 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.864253+0000 mon.c (mon.1) 465 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.865168+0000 mon.c (mon.1) 466 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.092 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.866059+0000 mon.c (mon.1) 467 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.867026+0000 mon.c (mon.1) 468 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: cephadm 2026-03-10T11:52:45.867637+0000 mgr.y (mgr.44107) 414 : cephadm [INF] Upgrade: Setting container_image for all iscsi
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.872087+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.874690+0000 mon.c (mon.1) 469 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.874893+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.878041+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]': finished
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.913242+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.916565+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.918765+0000 mon.c (mon.1) 481 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.918956+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.921857+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.923966+0000 mon.c (mon.1) 482 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.924153+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.927226+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.929406+0000 mon.c (mon.1) 483 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.929590+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.933039+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.935301+0000 mon.c (mon.1) 484 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.935481+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.938551+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.940775+0000 mon.c (mon.1) 485 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.940957+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.941446+0000 mon.c (mon.1) 486 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.941611+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.944853+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.946981+0000 mon.c (mon.1) 487 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.947394+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.950792+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.952993+0000 mon.c (mon.1) 488 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.953189+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.957190+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished
2026-03-10T11:52:47.093 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.959558+0000 mon.c (mon.1) 489 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.959767+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.964058+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.966253+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.966450+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.966934+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.967091+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.967531+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.967713+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.968143+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.968301+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.968771+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.968925+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.969541+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.969699+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: cephadm 2026-03-10T11:52:45.970079+0000 mgr.y (mgr.44107) 418 : cephadm [INF] Upgrade: Complete!
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.970330+0000 mon.c (mon.1) 496 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.970495+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.974772+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.975357+0000 mon.c (mon.1) 497 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.976732+0000 mon.c (mon.1) 498 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.977206+0000 mon.c (mon.1) 499 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:45.982775+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:46.024717+0000 mon.c (mon.1) 500 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:46.026146+0000 mon.c (mon.1) 501 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:46.026757+0000 mon.c (mon.1) 502 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:47.094 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:46 vm05 bash[65415]: audit 2026-03-10T11:52:46.032100+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.880834+0000 mon.c (mon.1) 470 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: cephadm 2026-03-10T11:52:45.881385+0000 mgr.y (mgr.44107) 415 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.884966+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.888164+0000 mon.c (mon.1) 471 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: cephadm 2026-03-10T11:52:45.888703+0000 mgr.y (mgr.44107) 416 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.892651+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.895505+0000 mon.c (mon.1) 472 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.896472+0000 mon.c (mon.1) 473 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.897365+0000 mon.c (mon.1) 474 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.898230+0000 mon.c (mon.1) 475 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.898975+0000 mon.c (mon.1) 476 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.899760+0000 mon.c (mon.1) 477 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: cephadm 2026-03-10T11:52:45.900239+0000 mgr.y (mgr.44107) 417 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.900987+0000 mon.c (mon.1) 478 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.901154+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.904949+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.907272+0000 mon.c (mon.1) 479 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.907467+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.910799+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.913051+0000 mon.c (mon.1) 480 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.913242+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.916565+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.918765+0000 mon.c (mon.1) 481 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.918956+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.921857+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.923966+0000 mon.c (mon.1) 482 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.924153+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.927226+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.929406+0000 mon.c (mon.1) 483 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.929590+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.933039+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-10T11:52:47.095 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.935301+0000 mon.c (mon.1) 484 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.935481+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.938551+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.940775+0000 mon.c (mon.1) 485 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.940957+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.941446+0000 mon.c (mon.1) 486 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.941611+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.941611+0000 mon.a (mon.0)
681 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.944853+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.944853+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.946981+0000 mon.c (mon.1) 487 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.946981+0000 mon.c (mon.1) 487 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.947394+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.947394+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.950792+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.950792+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.952993+0000 mon.c (mon.1) 488 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.952993+0000 mon.c (mon.1) 488 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.953189+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 
bash[68966]: audit 2026-03-10T11:52:45.953189+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.957190+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.957190+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.959558+0000 mon.c (mon.1) 489 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.959558+0000 mon.c (mon.1) 489 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.959767+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.959767+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.964058+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.964058+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.966253+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.966253+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.966450+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 
11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.966450+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.966934+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.966934+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.967091+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.967091+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.967531+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.967531+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.967713+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.967713+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.968143+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.968143+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.968301+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 
bash[68966]: audit 2026-03-10T11:52:45.968301+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.968771+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.968771+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.968925+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.968925+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.969541+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.969541+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.969699+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.969699+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: cephadm 2026-03-10T11:52:45.970079+0000 mgr.y (mgr.44107) 418 : cephadm [INF] Upgrade: Complete! 2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: cephadm 2026-03-10T11:52:45.970079+0000 mgr.y (mgr.44107) 418 : cephadm [INF] Upgrade: Complete! 
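The "Upgrade: Complete!" event above is the end of the staggered upgrade: mgr.y has just removed the per-daemon-type container_image overrides it maintained while individual batches were on different images, so every daemon type now falls back to the cluster-wide default image. A minimal sketch of how that end state could be checked from a cephadm shell (standard ceph CLI commands; the exact assertions this job makes are defined in its task list, not in this log):

    ceph orch upgrade status                  # should report no upgrade in progress
    ceph versions                             # all daemons should report the target version
    ceph config dump | grep container_image   # the per-daemon-type overrides removed above should be gone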
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.970330+0000 mon.c (mon.1) 496 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.970495+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.974772+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished
2026-03-10T11:52:47.096 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.975357+0000 mon.c (mon.1) 497 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:47.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.976732+0000 mon.c (mon.1) 498 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:47.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.977206+0000 mon.c (mon.1) 499 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:47.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:45.982775+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:46.024717+0000 mon.c (mon.1) 500 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:47.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:46.026146+0000 mon.c (mon.1) 501 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:47.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:46.026757+0000 mon.c (mon.1) 502 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:47.097 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:46 vm05 bash[68966]: audit 2026-03-10T11:52:46.032100+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: cluster 2026-03-10T11:52:45.486136+0000 mgr.y (mgr.44107) 409 : cluster [DBG] pgmap v208: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.6 KiB/s rd, 1 op/s
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.768714+0000 mon.a (mon.0) 656 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.777348+0000 mon.a (mon.0) 657 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.781373+0000 mon.c (mon.1) 453 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.782385+0000 mon.c (mon.1) 454 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.787312+0000 mon.a (mon.0) 658 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.803737+0000 mon.c (mon.1) 455 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.804591+0000 mgr.y (mgr.44107) 410 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-list"}]: dispatch
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.814381+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: cephadm 2026-03-10T11:52:45.818169+0000 mgr.y (mgr.44107) 411 : cephadm [INF] Adding iSCSI gateway http://:@192.168.123.105:5000 to Dashboard
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.818418+0000 mon.c (mon.1) 456 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:52:47.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.818809+0000 mgr.y (mgr.44107) 412 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard set-iscsi-api-ssl-verification", "value": "true"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.820255+0000 mon.c (mon.1) 457 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.820650+0000 mgr.y (mgr.44107) 413 : audit [DBG] from='mon.1 -' entity='mon.' cmd=[{"prefix": "dashboard iscsi-gateway-add", "name": "vm05"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.824387+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.855092+0000 mon.c (mon.1) 458 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.856983+0000 mon.c (mon.1) 459 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.858258+0000 mon.c (mon.1) 460 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.859196+0000 mon.c (mon.1) 461 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.860653+0000 mon.c (mon.1) 462 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.862284+0000 mon.c (mon.1) 463 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.863336+0000 mon.c (mon.1) 464 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.864253+0000 mon.c (mon.1) 465 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.865168+0000 mon.c (mon.1) 466 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.866059+0000 mon.c (mon.1) 467 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.867026+0000 mon.c (mon.1) 468 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: cephadm 2026-03-10T11:52:45.867637+0000 mgr.y (mgr.44107) 414 : cephadm [INF] Upgrade: Setting container_image for all iscsi
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.872087+0000 mon.a (mon.0) 661 : audit [INF] from='mgr.44107 ' entity='mgr.y'
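The "Upgrade: Setting container_image for all iscsi" event above (followed below by the same for nfs and nvmeof) is the finalization phase of a staggered upgrade: once the last batch converges, cephadm sets and then clears container_image per daemon type rather than flipping a single global value. As an illustration only, not the exact sequence this job ran, a staggered upgrade is driven by filtering ceph orch upgrade start with --daemon-types, --services, --hosts, or --limit, e.g.:

    ceph orch upgrade start --image <target-image> --daemon-types mgr
    ceph orch upgrade start --image <target-image> --daemon-types mon,osd --hosts vm05
    ceph orch upgrade start --image <target-image> --services rgw.foo
    ceph orch upgrade start --image <target-image>   # final pass upgrades the remaining daemons

Here <target-image> is a placeholder; vm05 and rgw.foo are the host and RGW service names used elsewhere in this run.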
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.874690+0000 mon.c (mon.1) 469 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.874893+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.878041+0000 mon.a (mon.0) 663 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi.foo.vm05.txapnk"}]': finished
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.880834+0000 mon.c (mon.1) 470 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: cephadm 2026-03-10T11:52:45.881385+0000 mgr.y (mgr.44107) 415 : cephadm [INF] Upgrade: Setting container_image for all nfs
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.884966+0000 mon.a (mon.0) 664 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.888164+0000 mon.c (mon.1) 471 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.197 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: cephadm 2026-03-10T11:52:45.888703+0000 mgr.y (mgr.44107) 416 : cephadm [INF] Upgrade: Setting container_image for all nvmeof
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.892651+0000 mon.a (mon.0) 665 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.895505+0000 mon.c (mon.1) 472 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.896472+0000 mon.c (mon.1) 473 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.897365+0000 mon.c (mon.1) 474 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.898230+0000 mon.c (mon.1) 475 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.898975+0000 mon.c (mon.1) 476 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.899760+0000 mon.c (mon.1) 477 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: cephadm 2026-03-10T11:52:45.900239+0000 mgr.y (mgr.44107) 417 : cephadm [INF] Upgrade: Finalizing container_image settings
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.900987+0000 mon.c (mon.1) 478 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.901154+0000 mon.a (mon.0) 666 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.904949+0000 mon.a (mon.0) 667 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mgr"}]': finished
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.907272+0000 mon.c (mon.1) 479 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.907467+0000 mon.a (mon.0) 668 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.910799+0000 mon.a (mon.0) 669 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mon"}]': finished
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.913051+0000 mon.c (mon.1) 480 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.913242+0000 mon.a (mon.0) 670 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.916565+0000 mon.a (mon.0) 671 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.crash"}]': finished
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.918765+0000 mon.c (mon.1) 481 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.918956+0000 mon.a (mon.0) 672 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "osd"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.921857+0000 mon.a (mon.0) 673 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "osd"}]': finished
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.923966+0000 mon.c (mon.1) 482 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.924153+0000 mon.a (mon.0) 674 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mds"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.927226+0000 mon.a (mon.0) 675 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "mds"}]': finished
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.929406+0000 mon.c (mon.1) 483 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.929590+0000 mon.a (mon.0) 676 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.933039+0000 mon.a (mon.0) 677 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rgw"}]': finished
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.935301+0000 mon.c (mon.1) 484 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.935481+0000 mon.a (mon.0) 678 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.938551+0000 mon.a (mon.0) 679 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.rbd-mirror"}]': finished
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.940775+0000 mon.c (mon.1) 485 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.940957+0000 mon.a (mon.0) 680 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.941446+0000 mon.c (mon.1) 486 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.941611+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]: dispatch
2026-03-10T11:52:47.198 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.944853+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.ceph-exporter"}]': finished
2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.946981+0000 mon.c (mon.1) 487 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.947394+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch
2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: 
audit 2026-03-10T11:52:45.947394+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.950792+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.950792+0000 mon.a (mon.0) 684 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.iscsi"}]': finished 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.952993+0000 mon.c (mon.1) 488 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.952993+0000 mon.c (mon.1) 488 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.953189+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.953189+0000 mon.a (mon.0) 685 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.957190+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.957190+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nfs"}]': finished 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.959558+0000 mon.c (mon.1) 489 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.959558+0000 mon.c (mon.1) 489 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.959767+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:47.199 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.959767+0000 mon.a (mon.0) 687 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.964058+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.964058+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix": "config rm", "name": "container_image", "who": "client.nvmeof"}]': finished 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.966253+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.966253+0000 mon.c (mon.1) 490 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.966450+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.966450+0000 mon.a (mon.0) 689 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.966934+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.966934+0000 mon.c (mon.1) 491 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.967091+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.967091+0000 mon.a (mon.0) 690 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.967531+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.967531+0000 mon.c (mon.1) 492 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.967713+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.967713+0000 mon.a (mon.0) 691 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.968143+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.968143+0000 mon.c (mon.1) 493 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.968301+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.968301+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.968771+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.968771+0000 mon.c (mon.1) 494 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.968925+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.968925+0000 mon.a (mon.0) 693 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.969541+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 
INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.969541+0000 mon.c (mon.1) 495 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.969699+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.969699+0000 mon.a (mon.0) 694 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix": "config rm", "name": "container_image", "who": "mon"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: cephadm 2026-03-10T11:52:45.970079+0000 mgr.y (mgr.44107) 418 : cephadm [INF] Upgrade: Complete! 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: cephadm 2026-03-10T11:52:45.970079+0000 mgr.y (mgr.44107) 418 : cephadm [INF] Upgrade: Complete! 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.970330+0000 mon.c (mon.1) 496 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.970330+0000 mon.c (mon.1) 496 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.970495+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.970495+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd=[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.974772+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.974772+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.44107 ' entity='mgr.y' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/upgrade_state"}]': finished 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.975357+0000 mon.c (mon.1) 497 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.975357+0000 mon.c (mon.1) 497 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.976732+0000 mon.c (mon.1) 498 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.977206+0000 mon.c (mon.1) 499 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:45.982775+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:47.199 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:46.024717+0000 mon.c (mon.1) 500 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:52:47.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:46.026146+0000 mon.c (mon.1) 501 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:52:47.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:46.026757+0000 mon.c (mon.1) 502 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:52:47.200 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:46 vm07 bash[46158]: audit 2026-03-10T11:52:46.032100+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:49.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:48 vm05 bash[65415]: cluster 2026-03-10T11:52:47.486638+0000 mgr.y (mgr.44107) 419 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:49.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:48 vm05 bash[68966]: cluster 2026-03-10T11:52:47.486638+0000 mgr.y (mgr.44107) 419 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:49.090 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:52:48 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:52:48] "GET /metrics HTTP/1.1" 200 37921 "" "Prometheus/2.51.0"
2026-03-10T11:52:49.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:48 vm07 bash[46158]: cluster 2026-03-10T11:52:47.486638+0000 mgr.y (mgr.44107) 419 : cluster [DBG] pgmap v209: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:51.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:50 vm05 bash[65415]: cluster 2026-03-10T11:52:49.487075+0000 mgr.y (mgr.44107) 420 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:52:51.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:50 vm05 bash[65415]: audit 2026-03-10T11:52:50.496743+0000 mon.c (mon.1) 503 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:52:51.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:50 vm05 bash[68966]: cluster 2026-03-10T11:52:49.487075+0000 mgr.y (mgr.44107) 420 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:52:51.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:50 vm05 bash[68966]: audit 2026-03-10T11:52:50.496743+0000 mon.c (mon.1) 503 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:52:51.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:50 vm07 bash[46158]: cluster 2026-03-10T11:52:49.487075+0000 mgr.y (mgr.44107) 420 : cluster [DBG] pgmap v210: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s
2026-03-10T11:52:51.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:50 vm07 bash[46158]: audit 2026-03-10T11:52:50.496743+0000 mon.c (mon.1) 503 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:52:52.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:51 vm07 bash[46158]: audit 2026-03-10T11:52:50.921837+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:52.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:51 vm05 bash[65415]: audit 2026-03-10T11:52:50.921837+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:52.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:51 vm05 bash[68966]: audit 2026-03-10T11:52:50.921837+0000 mon.a (mon.0) 699 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:52:53.196 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:52 vm07 bash[46158]: cluster 2026-03-10T11:52:51.487480+0000 mgr.y (mgr.44107) 421 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 998 B/s rd, 0 op/s
2026-03-10T11:52:53.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:52 vm05 bash[65415]: cluster 2026-03-10T11:52:51.487480+0000 mgr.y (mgr.44107) 421 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 998 B/s rd, 0 op/s
2026-03-10T11:52:53.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:52 vm05 bash[68966]: cluster 2026-03-10T11:52:51.487480+0000 mgr.y (mgr.44107) 421 : cluster [DBG] pgmap v211: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 998 B/s rd, 0 op/s
2026-03-10T11:52:53.415 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (19m) 14s ago 26m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (6m) 56s ago 26m 66.9M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (19s) 14s ago 25m 75.9M - 3.9 654f31e6858e 82cca4edc1e9
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (7m) 56s ago 28m 468M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (16m) 14s ago 29m 539M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (5m) 14s ago 29m 58.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (6m) 56s ago 29m 50.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (5m) 14s ago 29m 53.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (19m) 14s ago 26m 8143k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (19m) 56s ago 26m 7919k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (3m) 14s ago 28m 55.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (2m) 14s ago 28m 53.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8c6d1f8db09
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (4m) 14s ago 28m 51.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (4m) 14s ago 27m 75.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (2m) 56s ago 27m 53.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f48f9737e97e
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (2m) 56s ago 27m 49.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4b51ce79d374
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (113s) 56s ago 27m 68.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8db64879085d
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (98s) 56s ago 26m 69.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e86e1860ea0d
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (7m) 56s ago 26m 45.5M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (63s) 14s ago 25m 91.4M - 19.2.3-678-ge911bdeb 654f31e6858e 41b55296180b
2026-03-10T11:52:53.883 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (62s) 56s ago 25m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e 8f8f41b99bda
2026-03-10T11:52:53.929 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions'
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "mon": {
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": {
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "osd": {
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": {
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: },
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "overall": {
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 15
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout: }
2026-03-10T11:52:54.385 INFO:teuthology.orchestra.run.vm05.stdout:}
2026-03-10T11:52:54.432 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'echo "wait for servicemap items w/ changing names to refresh"'
2026-03-10T11:52:54.667 INFO:teuthology.orchestra.run.vm05.stdout:wait for servicemap items w/ changing names to refresh
2026-03-10T11:52:54.702 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'sleep 60'
2026-03-10T11:52:55.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:54 vm05 bash[65415]: audit 2026-03-10T11:52:53.358350+0000 mgr.y (mgr.44107) 422 : audit [DBG] from='client.54714 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:55.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:54 vm05 bash[65415]: cluster 2026-03-10T11:52:53.487858+0000 mgr.y (mgr.44107) 423 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:55.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:54 vm05 bash[65415]: audit 2026-03-10T11:52:53.885017+0000 mgr.y (mgr.44107) 424 : audit [DBG] from='client.34643 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:55.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:54 vm05 bash[65415]: audit 2026-03-10T11:52:54.389942+0000 mon.c (mon.1) 504 : audit [DBG] from='client.? 192.168.123.105:0/2536642709' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:55.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:54 vm05 bash[68966]: audit 2026-03-10T11:52:53.358350+0000 mgr.y (mgr.44107) 422 : audit [DBG] from='client.54714 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:55.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:54 vm05 bash[68966]: cluster 2026-03-10T11:52:53.487858+0000 mgr.y (mgr.44107) 423 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:55.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:54 vm05 bash[68966]: audit 2026-03-10T11:52:53.885017+0000 mgr.y (mgr.44107) 424 : audit [DBG] from='client.34643 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:55.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:54 vm05 bash[68966]: audit 2026-03-10T11:52:54.389942+0000 mon.c (mon.1) 504 : audit [DBG] from='client.? 192.168.123.105:0/2536642709' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:54 vm07 bash[46158]: audit 2026-03-10T11:52:53.358350+0000 mgr.y (mgr.44107) 422 : audit [DBG] from='client.54714 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:54 vm07 bash[46158]: cluster 2026-03-10T11:52:53.487858+0000 mgr.y (mgr.44107) 423 : cluster [DBG] pgmap v212: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:52:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:54 vm07 bash[46158]: audit 2026-03-10T11:52:53.885017+0000 mgr.y (mgr.44107) 424 : audit [DBG] from='client.34643 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:52:55.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:54 vm07 bash[46158]: audit 2026-03-10T11:52:54.389942+0000 mon.c (mon.1) 504 : audit [DBG] from='client.? 192.168.123.105:0/2536642709' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:52:56.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:55 vm05 bash[65415]: audit 2026-03-10T11:52:54.787992+0000 mgr.y (mgr.44107) 425 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:56.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:55 vm05 bash[68966]: audit 2026-03-10T11:52:54.787992+0000 mgr.y (mgr.44107) 425 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:56.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:55 vm07 bash[46158]: audit 2026-03-10T11:52:54.787992+0000 mgr.y (mgr.44107) 425 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:52:57.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:56 vm05 bash[65415]: cluster 2026-03-10T11:52:55.488197+0000 mgr.y (mgr.44107) 426 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:57.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:56 vm05 bash[68966]: cluster 2026-03-10T11:52:55.488197+0000 mgr.y (mgr.44107) 426 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:52:57.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:56 vm07 bash[46158]: cluster 2026-03-10T11:52:55.488197+0000 mgr.y (mgr.44107) 426 : cluster [DBG] pgmap v213: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:52:59.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:58 vm05 bash[65415]: cluster 2026-03-10T11:52:57.488554+0000 mgr.y (mgr.44107) 427 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:52:59.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:52:58 vm05 bash[65415]: cluster 2026-03-10T11:52:57.488554+0000 mgr.y (mgr.44107) 427 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:52:59.339 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:52:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:52:58] "GET /metrics HTTP/1.1" 200 37924 "" "Prometheus/2.51.0" 2026-03-10T11:52:59.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:58 vm05 bash[68966]: cluster 2026-03-10T11:52:57.488554+0000 mgr.y (mgr.44107) 427 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:52:59.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:52:58 vm05 bash[68966]: cluster 2026-03-10T11:52:57.488554+0000 mgr.y (mgr.44107) 427 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:52:59.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:58 vm07 bash[46158]: cluster 2026-03-10T11:52:57.488554+0000 mgr.y (mgr.44107) 427 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:52:59.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:52:58 vm07 bash[46158]: cluster 2026-03-10T11:52:57.488554+0000 mgr.y (mgr.44107) 427 : cluster [DBG] pgmap v214: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:01.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:00 vm05 bash[65415]: cluster 2026-03-10T11:52:59.488858+0000 mgr.y (mgr.44107) 428 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:01.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:00 vm05 bash[65415]: cluster 2026-03-10T11:52:59.488858+0000 mgr.y (mgr.44107) 428 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:01.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:00 vm05 bash[68966]: cluster 2026-03-10T11:52:59.488858+0000 mgr.y (mgr.44107) 428 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:01.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:00 vm05 bash[68966]: cluster 2026-03-10T11:52:59.488858+0000 mgr.y (mgr.44107) 428 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:01.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:00 vm07 bash[46158]: cluster 2026-03-10T11:52:59.488858+0000 mgr.y (mgr.44107) 428 : cluster [DBG] pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:01.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:00 vm07 bash[46158]: cluster 2026-03-10T11:52:59.488858+0000 mgr.y (mgr.44107) 428 : cluster [DBG] 
pgmap v215: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:03.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:02 vm05 bash[65415]: cluster 2026-03-10T11:53:01.489221+0000 mgr.y (mgr.44107) 429 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:53:03.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:02 vm05 bash[65415]: cluster 2026-03-10T11:53:01.489221+0000 mgr.y (mgr.44107) 429 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:53:03.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:02 vm05 bash[68966]: cluster 2026-03-10T11:53:01.489221+0000 mgr.y (mgr.44107) 429 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:53:03.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:02 vm05 bash[68966]: cluster 2026-03-10T11:53:01.489221+0000 mgr.y (mgr.44107) 429 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:53:03.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:02 vm07 bash[46158]: cluster 2026-03-10T11:53:01.489221+0000 mgr.y (mgr.44107) 429 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:53:03.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:02 vm07 bash[46158]: cluster 2026-03-10T11:53:01.489221+0000 mgr.y (mgr.44107) 429 : cluster [DBG] pgmap v216: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:53:05.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:05 vm05 bash[65415]: cluster 2026-03-10T11:53:03.489613+0000 mgr.y (mgr.44107) 430 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:05.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:05 vm05 bash[65415]: cluster 2026-03-10T11:53:03.489613+0000 mgr.y (mgr.44107) 430 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:05.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:05 vm05 bash[68966]: cluster 2026-03-10T11:53:03.489613+0000 mgr.y (mgr.44107) 430 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:05.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:05 vm05 bash[68966]: cluster 2026-03-10T11:53:03.489613+0000 mgr.y (mgr.44107) 430 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:05.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:05 vm07 bash[46158]: cluster 2026-03-10T11:53:03.489613+0000 mgr.y (mgr.44107) 430 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:05.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:05 vm07 bash[46158]: cluster 2026-03-10T11:53:03.489613+0000 mgr.y (mgr.44107) 430 : cluster [DBG] pgmap v217: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-10T11:53:06.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:06 vm05 bash[65415]: audit 2026-03-10T11:53:04.798543+0000 mgr.y (mgr.44107) 431 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:06.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:06 vm05 bash[65415]: audit 2026-03-10T11:53:05.496762+0000 mon.c (mon.1) 505 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:53:06.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:06 vm05 bash[68966]: audit 2026-03-10T11:53:04.798543+0000 mgr.y (mgr.44107) 431 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:06.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:06 vm05 bash[68966]: audit 2026-03-10T11:53:05.496762+0000 mon.c (mon.1) 505 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:53:06.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:06 vm07 bash[46158]: audit 2026-03-10T11:53:04.798543+0000 mgr.y (mgr.44107) 431 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:06.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:06 vm07 bash[46158]: audit 2026-03-10T11:53:05.496762+0000 mon.c (mon.1) 505 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:53:07.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:07 vm05 bash[65415]: cluster 2026-03-10T11:53:05.489947+0000 mgr.y (mgr.44107) 432 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:07.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:07 vm05 bash[68966]: cluster 2026-03-10T11:53:05.489947+0000 mgr.y (mgr.44107) 432 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:07.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:07 vm07 bash[46158]: cluster 2026-03-10T11:53:05.489947+0000 mgr.y (mgr.44107) 432 : cluster [DBG] pgmap v218: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:09.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:09 vm05 bash[65415]: cluster 2026-03-10T11:53:07.490399+0000 mgr.y (mgr.44107) 433 : cluster [DBG] pgmap v219: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:09.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:09 vm05 bash[68966]: cluster 2026-03-10T11:53:07.490399+0000 mgr.y (mgr.44107) 433 : cluster [DBG] pgmap v219: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:09.339 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:53:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:53:08] "GET /metrics HTTP/1.1" 200 37923 "" "Prometheus/2.51.0"
2026-03-10T11:53:09.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:09 vm07 bash[46158]: cluster 2026-03-10T11:53:07.490399+0000 mgr.y (mgr.44107) 433 : cluster [DBG] pgmap v219: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:11.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:11 vm05 bash[68966]: cluster 2026-03-10T11:53:09.490782+0000 mgr.y (mgr.44107) 434 : cluster [DBG] pgmap v220: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:11.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:11 vm05 bash[65415]: cluster 2026-03-10T11:53:09.490782+0000 mgr.y (mgr.44107) 434 : cluster [DBG] pgmap v220: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:11.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:11 vm07 bash[46158]: cluster 2026-03-10T11:53:09.490782+0000 mgr.y (mgr.44107) 434 : cluster [DBG] pgmap v220: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:13.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:13 vm05 bash[65415]: cluster 2026-03-10T11:53:11.491207+0000 mgr.y (mgr.44107) 435 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:13.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:13 vm05 bash[68966]: cluster 2026-03-10T11:53:11.491207+0000 mgr.y (mgr.44107) 435 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:13.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:13 vm07 bash[46158]: cluster 2026-03-10T11:53:11.491207+0000 mgr.y (mgr.44107) 435 : cluster [DBG] pgmap v221: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:15.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:15 vm05 bash[65415]: cluster 2026-03-10T11:53:13.491585+0000 mgr.y (mgr.44107) 436 : cluster [DBG] pgmap v222: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:15.089 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:15 vm05 bash[68966]: cluster 2026-03-10T11:53:13.491585+0000 mgr.y (mgr.44107) 436 : cluster [DBG] pgmap v222: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:15.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:15 vm07 bash[46158]: cluster 2026-03-10T11:53:13.491585+0000 mgr.y (mgr.44107) 436 : cluster [DBG] pgmap v222: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:16.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:16 vm05 bash[65415]: audit 2026-03-10T11:53:14.805526+0000 mgr.y (mgr.44107) 437 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:16.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:16 vm05 bash[68966]: audit 2026-03-10T11:53:14.805526+0000 mgr.y (mgr.44107) 437 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:16.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:16 vm07 bash[46158]: audit 2026-03-10T11:53:14.805526+0000 mgr.y (mgr.44107) 437 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:17.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:17 vm05 bash[65415]: cluster 2026-03-10T11:53:15.492009+0000 mgr.y (mgr.44107) 438 : cluster [DBG] pgmap v223: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:17.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:17 vm05 bash[68966]: cluster 2026-03-10T11:53:15.492009+0000 mgr.y (mgr.44107) 438 : cluster [DBG] pgmap v223: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:17.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:17 vm07 bash[46158]: cluster 2026-03-10T11:53:15.492009+0000 mgr.y (mgr.44107) 438 : cluster [DBG] pgmap v223: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:19.339 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:19 vm05 bash[65415]: cluster 2026-03-10T11:53:17.492438+0000 mgr.y (mgr.44107) 439 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:19.339 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:19 vm05 bash[68966]: cluster 2026-03-10T11:53:17.492438+0000 mgr.y (mgr.44107) 439 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:19.339 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:53:18 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:53:18] "GET /metrics HTTP/1.1" 200 37923 "" "Prometheus/2.51.0"
2026-03-10T11:53:19.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:19 vm07 bash[46158]: cluster 2026-03-10T11:53:17.492438+0000 mgr.y (mgr.44107) 439 : cluster [DBG] pgmap v224: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:21.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:21 vm07 bash[46158]: cluster 2026-03-10T11:53:19.492844+0000 mgr.y (mgr.44107) 440 : cluster [DBG] pgmap v225: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:21.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:21 vm07 bash[46158]: audit 2026-03-10T11:53:20.497097+0000 mon.c (mon.1) 506 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:53:21.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:21 vm05 bash[65415]: cluster 2026-03-10T11:53:19.492844+0000 mgr.y (mgr.44107) 440 : cluster [DBG] pgmap v225: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:21.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:21 vm05 bash[65415]: audit 2026-03-10T11:53:20.497097+0000 mon.c (mon.1) 506 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:53:21.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:21 vm05 bash[68966]: cluster 2026-03-10T11:53:19.492844+0000 mgr.y (mgr.44107) 440 : cluster [DBG] pgmap v225: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:21.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:21 vm05 bash[68966]: audit 2026-03-10T11:53:20.497097+0000 mon.c (mon.1) 506 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:53:23.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:23 vm07 bash[46158]: cluster 2026-03-10T11:53:21.493296+0000 mgr.y (mgr.44107) 441 : cluster [DBG] pgmap v226: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:23.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:23 vm05 bash[65415]: cluster 2026-03-10T11:53:21.493296+0000 mgr.y (mgr.44107) 441 : cluster [DBG] pgmap v226: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:23.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:23 vm05 bash[68966]: cluster 2026-03-10T11:53:21.493296+0000 mgr.y (mgr.44107) 441 : cluster [DBG] pgmap v226: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:25.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:25 vm07 bash[46158]: cluster 2026-03-10T11:53:23.493706+0000 mgr.y (mgr.44107) 442 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:25.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:25 vm05 bash[68966]: cluster 2026-03-10T11:53:23.493706+0000 mgr.y (mgr.44107) 442 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:25.590 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:25 vm05 bash[65415]: cluster 2026-03-10T11:53:23.493706+0000 mgr.y (mgr.44107) 442 : cluster [DBG] pgmap v227: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:26.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:26 vm07 bash[46158]: audit 2026-03-10T11:53:24.809746+0000 mgr.y (mgr.44107) 443 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:26.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:26 vm05 bash[65415]: audit 2026-03-10T11:53:24.809746+0000 mgr.y (mgr.44107) 443 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:26.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:26 vm05 bash[68966]: audit 2026-03-10T11:53:24.809746+0000 mgr.y (mgr.44107) 443 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:27.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:27 vm07 bash[46158]: cluster 2026-03-10T11:53:25.494085+0000 mgr.y (mgr.44107) 444 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:27.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:27 vm05 bash[65415]: cluster 2026-03-10T11:53:25.494085+0000 mgr.y (mgr.44107) 444 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:27.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:27 vm05 bash[68966]: cluster 2026-03-10T11:53:25.494085+0000 mgr.y (mgr.44107) 444 : cluster [DBG] pgmap v228: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:29.120 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:53:28 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:53:28] "GET /metrics HTTP/1.1" 200 37925 "" "Prometheus/2.51.0"
2026-03-10T11:53:29.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:29 vm07 bash[46158]: cluster 2026-03-10T11:53:27.494545+0000 mgr.y (mgr.44107) 445 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:29.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:29 vm05 bash[65415]: cluster 2026-03-10T11:53:27.494545+0000 mgr.y (mgr.44107) 445 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:29.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:29 vm05 bash[68966]: cluster 2026-03-10T11:53:27.494545+0000 mgr.y (mgr.44107) 445 : cluster [DBG] pgmap v229: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:31.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:31 vm07 bash[46158]: cluster 2026-03-10T11:53:29.494891+0000 mgr.y (mgr.44107) 446 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:31.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:31 vm05 bash[65415]: cluster 2026-03-10T11:53:29.494891+0000 mgr.y (mgr.44107) 446 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:31.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:31 vm05 bash[68966]: cluster 2026-03-10T11:53:29.494891+0000 mgr.y (mgr.44107) 446 : cluster [DBG] pgmap v230: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:33.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:33 vm07 bash[46158]: cluster 2026-03-10T11:53:31.495299+0000 mgr.y (mgr.44107) 447 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:33.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:33 vm05 bash[65415]: cluster 2026-03-10T11:53:31.495299+0000 mgr.y (mgr.44107) 447 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:33.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:33 vm05 bash[68966]: cluster 2026-03-10T11:53:31.495299+0000 mgr.y (mgr.44107) 447 : cluster [DBG] pgmap v231: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:35.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:35 vm07 bash[46158]: cluster 2026-03-10T11:53:33.495692+0000 mgr.y (mgr.44107) 448 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:35.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:35 vm05 bash[65415]: cluster 2026-03-10T11:53:33.495692+0000 mgr.y (mgr.44107) 448 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:35.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:35 vm05 bash[68966]: cluster 2026-03-10T11:53:33.495692+0000 mgr.y (mgr.44107) 448 : cluster [DBG] pgmap v232: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
"service status", "format": "json"}]: dispatch 2026-03-10T11:53:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:36 vm07 bash[46158]: audit 2026-03-10T11:53:34.820335+0000 mgr.y (mgr.44107) 449 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:36 vm07 bash[46158]: audit 2026-03-10T11:53:35.499837+0000 mon.c (mon.1) 507 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:36.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:36 vm07 bash[46158]: audit 2026-03-10T11:53:35.499837+0000 mon.c (mon.1) 507 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:36.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:36 vm05 bash[65415]: audit 2026-03-10T11:53:34.820335+0000 mgr.y (mgr.44107) 449 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:36.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:36 vm05 bash[65415]: audit 2026-03-10T11:53:34.820335+0000 mgr.y (mgr.44107) 449 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:36.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:36 vm05 bash[65415]: audit 2026-03-10T11:53:35.499837+0000 mon.c (mon.1) 507 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:36.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:36 vm05 bash[65415]: audit 2026-03-10T11:53:35.499837+0000 mon.c (mon.1) 507 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:36.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:36 vm05 bash[68966]: audit 2026-03-10T11:53:34.820335+0000 mgr.y (mgr.44107) 449 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:36.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:36 vm05 bash[68966]: audit 2026-03-10T11:53:34.820335+0000 mgr.y (mgr.44107) 449 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:36.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:36 vm05 bash[68966]: audit 2026-03-10T11:53:35.499837+0000 mon.c (mon.1) 507 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:36.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:36 vm05 bash[68966]: audit 2026-03-10T11:53:35.499837+0000 mon.c (mon.1) 507 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:37 vm07 bash[46158]: cluster 2026-03-10T11:53:35.496026+0000 mgr.y (mgr.44107) 450 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 
2026-03-10T11:53:37.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:37 vm07 bash[46158]: cluster 2026-03-10T11:53:35.496026+0000 mgr.y (mgr.44107) 450 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:37.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:37 vm05 bash[65415]: cluster 2026-03-10T11:53:35.496026+0000 mgr.y (mgr.44107) 450 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:37.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:37 vm05 bash[68966]: cluster 2026-03-10T11:53:35.496026+0000 mgr.y (mgr.44107) 450 : cluster [DBG] pgmap v233: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:39.166 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:53:38 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:53:38] "GET /metrics HTTP/1.1" 200 37925 "" "Prometheus/2.51.0"
2026-03-10T11:53:39.446 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:39 vm07 bash[46158]: cluster 2026-03-10T11:53:37.496560+0000 mgr.y (mgr.44107) 451 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:39.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:39 vm05 bash[68966]: cluster 2026-03-10T11:53:37.496560+0000 mgr.y (mgr.44107) 451 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:39.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:39 vm05 bash[65415]: cluster 2026-03-10T11:53:37.496560+0000 mgr.y (mgr.44107) 451 : cluster [DBG] pgmap v234: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:40.589 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:40 vm05 bash[68966]: cluster 2026-03-10T11:53:39.496917+0000 mgr.y (mgr.44107) 452 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:40.589 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:40 vm05 bash[65415]: cluster 2026-03-10T11:53:39.496917+0000 mgr.y (mgr.44107) 452 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:40.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:40 vm07 bash[46158]: cluster 2026-03-10T11:53:39.496917+0000 mgr.y (mgr.44107) 452 : cluster [DBG] pgmap v235: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:42.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:42 vm05 bash[65415]: cluster 2026-03-10T11:53:41.497296+0000 mgr.y (mgr.44107) 453 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:42.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:42 vm05 bash[68966]: cluster 2026-03-10T11:53:41.497296+0000 mgr.y (mgr.44107) 453 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:42.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:42 vm07 bash[46158]: cluster 2026-03-10T11:53:41.497296+0000 mgr.y (mgr.44107) 453 : cluster [DBG] pgmap v236: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:44.816 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:44 vm05 bash[65415]: cluster 2026-03-10T11:53:43.497818+0000 mgr.y (mgr.44107) 454 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:44.817 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:44 vm05 bash[68966]: cluster 2026-03-10T11:53:43.497818+0000 mgr.y (mgr.44107) 454 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:44.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:44 vm07 bash[46158]: cluster 2026-03-10T11:53:43.497818+0000 mgr.y (mgr.44107) 454 : cluster [DBG] pgmap v237: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:45.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:45 vm05 bash[65415]: audit 2026-03-10T11:53:44.822081+0000 mgr.y (mgr.44107) 455 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:45.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:45 vm05 bash[68966]: audit 2026-03-10T11:53:44.822081+0000 mgr.y (mgr.44107) 455 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:45.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:45 vm07 bash[46158]: audit 2026-03-10T11:53:44.822081+0000 mgr.y (mgr.44107) 455 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:53:46.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:46 vm05 bash[65415]: cluster 2026-03-10T11:53:45.498167+0000 mgr.y (mgr.44107) 456 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:46.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:46 vm05 bash[65415]: audit 2026-03-10T11:53:46.075648+0000 mon.c (mon.1) 508 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:53:46.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:46 vm05 bash[65415]: audit 2026-03-10T11:53:46.401459+0000 mon.c (mon.1) 509 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:53:46.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:46 vm05 bash[65415]: audit 2026-03-10T11:53:46.402416+0000 mon.c (mon.1) 510 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:53:46.840 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:46 vm05 bash[65415]: audit 2026-03-10T11:53:46.408782+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:53:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:46 vm05 bash[68966]: cluster 2026-03-10T11:53:45.498167+0000 mgr.y (mgr.44107) 456 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:46 vm05 bash[68966]: audit 2026-03-10T11:53:46.075648+0000 mon.c (mon.1) 508 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:53:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:46 vm05 bash[68966]: audit 2026-03-10T11:53:46.401459+0000 mon.c (mon.1) 509 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:53:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:46 vm05 bash[68966]: audit 2026-03-10T11:53:46.402416+0000 mon.c (mon.1) 510 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:53:46.840 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:46 vm05 bash[68966]: audit 2026-03-10T11:53:46.408782+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:53:46.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:46 vm07 bash[46158]: cluster 2026-03-10T11:53:45.498167+0000 mgr.y (mgr.44107) 456 : cluster [DBG] pgmap v238: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:46.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:46 vm07 bash[46158]: audit 2026-03-10T11:53:46.075648+0000 mon.c (mon.1) 508 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-10T11:53:46.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:46 vm07 bash[46158]: audit 2026-03-10T11:53:46.401459+0000 mon.c (mon.1) 509 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-10T11:53:46.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:46 vm07 bash[46158]: audit 2026-03-10T11:53:46.402416+0000 mon.c (mon.1) 510 : audit [INF] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-10T11:53:46.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:46 vm07 bash[46158]: audit 2026-03-10T11:53:46.408782+0000 mon.a (mon.0) 700 : audit [INF] from='mgr.44107 ' entity='mgr.y'
2026-03-10T11:53:48.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:48 vm05 bash[68966]: cluster 2026-03-10T11:53:47.498676+0000 mgr.y (mgr.44107) 457 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:48.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:48 vm05 bash[65415]: cluster 2026-03-10T11:53:47.498676+0000 mgr.y (mgr.44107) 457 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:48.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:48 vm07 bash[46158]: cluster 2026-03-10T11:53:47.498676+0000 mgr.y (mgr.44107) 457 : cluster [DBG] pgmap v239: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:49.339 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:53:48 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:53:48] "GET /metrics HTTP/1.1" 200 37925 "" "Prometheus/2.51.0"
2026-03-10T11:53:50.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:50 vm05 bash[65415]: cluster 2026-03-10T11:53:49.499006+0000 mgr.y (mgr.44107) 458 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:50.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:50 vm05 bash[65415]: audit 2026-03-10T11:53:50.497392+0000 mon.c (mon.1) 511 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
vm05 bash[65415]: audit 2026-03-10T11:53:50.497392+0000 mon.c (mon.1) 511 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:50.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:50 vm05 bash[68966]: cluster 2026-03-10T11:53:49.499006+0000 mgr.y (mgr.44107) 458 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:50.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:50 vm05 bash[68966]: cluster 2026-03-10T11:53:49.499006+0000 mgr.y (mgr.44107) 458 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:50.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:50 vm05 bash[68966]: audit 2026-03-10T11:53:50.497392+0000 mon.c (mon.1) 511 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:50.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:50 vm05 bash[68966]: audit 2026-03-10T11:53:50.497392+0000 mon.c (mon.1) 511 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:50.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:50 vm07 bash[46158]: cluster 2026-03-10T11:53:49.499006+0000 mgr.y (mgr.44107) 458 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:50.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:50 vm07 bash[46158]: cluster 2026-03-10T11:53:49.499006+0000 mgr.y (mgr.44107) 458 : cluster [DBG] pgmap v240: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-10T11:53:50.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:50 vm07 bash[46158]: audit 2026-03-10T11:53:50.497392+0000 mon.c (mon.1) 511 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:50.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:50 vm07 bash[46158]: audit 2026-03-10T11:53:50.497392+0000 mon.c (mon.1) 511 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-10T11:53:52.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:52 vm07 bash[46158]: cluster 2026-03-10T11:53:51.499372+0000 mgr.y (mgr.44107) 459 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:53:52.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:52 vm07 bash[46158]: cluster 2026-03-10T11:53:51.499372+0000 mgr.y (mgr.44107) 459 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:53:53.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:52 vm05 bash[65415]: cluster 2026-03-10T11:53:51.499372+0000 mgr.y (mgr.44107) 459 : cluster [DBG] pgmap v241: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-10T11:53:53.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:52 vm05 bash[65415]: cluster 2026-03-10T11:53:51.499372+0000 mgr.y (mgr.44107) 459 : 
2026-03-10T11:53:54.925 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:54 vm05 bash[68966]: cluster 2026-03-10T11:53:53.499764+0000 mgr.y (mgr.44107) 460 : cluster [DBG] pgmap v242: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:54.979 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ps'
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:alertmanager.a vm05 *:9093,9094 running (20m) 75s ago 27m 14.5M - 0.25.0 c8568f914cd2 6fd99810a680
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:grafana.a vm07 *:3000 running (7m) 118s ago 27m 66.9M - 10.4.0 c8b91775d855 3d10fa6a70a7
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:iscsi.foo.vm05.txapnk vm05 running (81s) 75s ago 26m 75.9M - 3.9 654f31e6858e 82cca4edc1e9
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:mgr.x vm07 *:8443,9283,8765 running (8m) 118s ago 29m 468M - 19.2.3-678-ge911bdeb 654f31e6858e 80520a82076d
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:mgr.y vm05 *:8443,9283,8765 running (17m) 75s ago 30m 539M - 19.2.3-678-ge911bdeb 654f31e6858e 1ac2f649933c
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:mon.a vm05 running (6m) 75s ago 30m 58.7M 2048M 19.2.3-678-ge911bdeb 654f31e6858e c4e98aeb2612
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:mon.b vm07 running (7m) 118s ago 30m 50.0M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e6a69c3d3376
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:mon.c vm05 running (6m) 75s ago 30m 53.6M 2048M 19.2.3-678-ge911bdeb 654f31e6858e 7ecd929b1534
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.a vm05 *:9100 running (20m) 75s ago 27m 8143k - 1.7.0 72c9c2088986 d4b69c85984a
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:node-exporter.b vm07 *:9100 running (20m) 118s ago 27m 7919k - 1.7.0 72c9c2088986 33ca1c822db8
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:osd.0 vm05 running (4m) 75s ago 29m 55.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 99f8081fc675
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:osd.1 vm05 running (3m) 75s ago 29m 53.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e c8c6d1f8db09
2026-03-10T11:53:55.442 INFO:teuthology.orchestra.run.vm05.stdout:osd.2 vm05 running (5m) 75s ago 29m 51.1M 4096M 19.2.3-678-ge911bdeb 654f31e6858e d776d9c09a7b
2026-03-10T11:53:55.443 INFO:teuthology.orchestra.run.vm05.stdout:osd.3 vm05 running (5m) 75s ago 29m 75.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f720fd6bd8d2
2026-03-10T11:53:55.443 INFO:teuthology.orchestra.run.vm05.stdout:osd.4 vm07 running (3m) 118s ago 28m 53.3M 4096M 19.2.3-678-ge911bdeb 654f31e6858e f48f9737e97e
2026-03-10T11:53:55.443 INFO:teuthology.orchestra.run.vm05.stdout:osd.5 vm07 running (3m) 118s ago 28m 49.4M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 4b51ce79d374
2026-03-10T11:53:55.443 INFO:teuthology.orchestra.run.vm05.stdout:osd.6 vm07 running (2m) 118s ago 28m 68.7M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8db64879085d
2026-03-10T11:53:55.443 INFO:teuthology.orchestra.run.vm05.stdout:osd.7 vm07 running (2m) 118s ago 27m 69.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e e86e1860ea0d
2026-03-10T11:53:55.443 INFO:teuthology.orchestra.run.vm05.stdout:prometheus.a vm07 *:9095 running (8m) 118s ago 27m 45.5M - 2.51.0 1d3b7f56885b d5a9b80fa8a4
2026-03-10T11:53:55.443 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm05.fdjkgz vm05 *:8000 running (2m) 75s ago 26m 91.4M - 19.2.3-678-ge911bdeb 654f31e6858e 41b55296180b
2026-03-10T11:53:55.443 INFO:teuthology.orchestra.run.vm05.stdout:rgw.foo.vm07.mbukmh vm07 *:8000 running (2m) 118s ago 26m 90.9M - 19.2.3-678-ge911bdeb 654f31e6858e 8f8f41b99bda
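At this point every orchestrator-managed daemon is running and all ceph daemons report the target build 19.2.3-678-ge911bdeb. A minimal scripted version of the same check, as a sketch only: it assumes `cephadm shell` can find the cluster on its own and that the JSON form of `orch ps` carries a `status_desc` field, as squid-era cephadm does.

    # assert that every orchestrator-managed daemon reports "running"
    sudo cephadm shell -- ceph orch ps --format json \
      | jq -e 'all(.[]; .status_desc == "running")'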
461 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:55.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:55 vm05 bash[68966]: audit 2026-03-10T11:53:54.823923+0000 mgr.y (mgr.44107) 461 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:55.839 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:53:55 vm05 bash[68966]: audit 2026-03-10T11:53:54.823923+0000 mgr.y (mgr.44107) 461 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:55.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:55 vm07 bash[46158]: audit 2026-03-10T11:53:54.823923+0000 mgr.y (mgr.44107) 461 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:55.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:55 vm07 bash[46158]: audit 2026-03-10T11:53:54.823923+0000 mgr.y (mgr.44107) 461 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "mon": { 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 3 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "mgr": { 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "osd": { 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 8 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "rgw": { 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 2 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: }, 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "overall": { 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)": 15 2026-03-10T11:53:55.990 INFO:teuthology.orchestra.run.vm05.stdout: } 2026-03-10T11:53:55.991 INFO:teuthology.orchestra.run.vm05.stdout:} 2026-03-10T11:53:56.041 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status' 2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout:{ 2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": null, 2026-03-10T11:53:56.464 
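`ceph versions` only counts core Ceph daemons, which is why `overall` is 15 here: 3 mon + 2 mgr + 8 osd + 2 rgw. The monitoring stack (prometheus, grafana, alertmanager, node-exporter) and the iscsi gateway run from non-Ceph images and carry no Ceph version entry. To eyeball the same map interactively, assuming a working admin keyring:

    sudo cephadm shell -- ceph versions | jq '.overall'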
2026-03-10T11:53:56.041 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch upgrade status'
2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout: "target_image": null,
2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout: "in_progress": false,
2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout: "which": "",
2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout: "services_complete": [],
2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout: "progress": null,
2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout: "message": "",
2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout: "is_paused": false
2026-03-10T11:53:56.464 INFO:teuthology.orchestra.run.vm05.stdout:}
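With `in_progress: false` and a null `target_image`, no orchestrated upgrade is active at this point; the staggered upgrade itself is driven by a later task. A hedged sketch of a wait loop built on the same JSON, using only the fields shown above:

    # block until any running upgrade has finished
    until sudo cephadm shell -- ceph orch upgrade status \
        | jq -e '.in_progress == false' >/dev/null; do
      sleep 30
    done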
2026-03-10T11:53:56.510 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph health detail'
2026-03-10T11:53:56.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:56 vm05 bash[65415]: audit 2026-03-10T11:53:55.443819+0000 mgr.y (mgr.44107) 462 : audit [DBG] from='client.54732 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:53:56.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:56 vm05 bash[65415]: cluster 2026-03-10T11:53:55.500519+0000 mgr.y (mgr.44107) 463 : cluster [DBG] pgmap v243: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:53:56.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:56 vm05 bash[65415]: audit 2026-03-10T11:53:55.995916+0000 mon.a (mon.0) 701 : audit [DBG] from='client.? 192.168.123.105:0/642916193' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:53:56.936 INFO:teuthology.orchestra.run.vm05.stdout:HEALTH_OK
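The suite gates on a clean cluster before inspecting upgrade state. An equivalent scripted gate, as a sketch that assumes `ceph health` honours `--format json` (it does on recent releases, returning a top-level `status` key):

    # fail fast unless the cluster is HEALTH_OK
    test "$(sudo cephadm shell -- ceph health --format json | jq -r .status)" = HEALTH_OK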
2026-03-10T11:53:56.984 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | length == 1'"'"''
2026-03-10T11:53:57.427 INFO:teuthology.orchestra.run.vm05.stdout:true
2026-03-10T11:53:57.463 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph versions | jq -e '"'"'.overall | keys'"'"' | grep $sha1'
2026-03-10T11:53:57.681 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:57 vm05 bash[65415]: audit 2026-03-10T11:53:56.469524+0000 mgr.y (mgr.44107) 464 : audit [DBG] from='client.54741 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:53:57.681 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:57 vm05 bash[65415]: audit 2026-03-10T11:53:56.941712+0000 mon.a (mon.0) 702 : audit [DBG] from='client.? 192.168.123.105:0/2401202530' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
2026-03-10T11:53:57.681 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:57 vm05 bash[65415]: audit 2026-03-10T11:53:57.418974+0000 mon.b (mon.2) 38 : audit [DBG] from='client.? 192.168.123.105:0/920592421' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
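The two `jq -e` probes above encode the suite's invariant: exactly one version across the whole cluster, and that version string must contain the expected build sha1 (passed in via `-e sha1=...` on the cephadm shell). A sketch collapsing both into a single expression, assuming $SHA1 holds the expected commit:

    sudo cephadm shell -- ceph versions \
      | jq -e --arg sha "$SHA1" '.overall | keys | length == 1 and (.[0] | contains($sha))'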
2026-03-10T11:53:57.933 INFO:teuthology.orchestra.run.vm05.stdout: "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)"
2026-03-10T11:53:57.975 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -e sha1=e911bdebe5c8faa3800735d1568fcdca65db60df -- bash -c 'ceph orch ls | grep '"'"'^osd '"'"''
2026-03-10T11:53:58.405 INFO:teuthology.orchestra.run.vm05.stdout:osd 8 2m ago -
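The `grep '^osd '` probe just confirms that the osd service still reports 8 daemons. The structured form of the same query, a sketch assuming the squid-era `orch ls` JSON layout with a `status.running` counter per service:

    sudo cephadm shell -- ceph orch ls osd --format json | jq '.[0].status.running'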
2026-03-10T11:53:58.468 INFO:teuthology.run_tasks:Running task cephadm.shell...
2026-03-10T11:53:58.470 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm05.local
2026-03-10T11:53:58.471 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- bash -c 'ceph orch upgrade ls'
2026-03-10T11:53:58.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:58 vm05 bash[65415]: cluster 2026-03-10T11:53:57.500948+0000 mgr.y (mgr.44107) 465 : cluster [DBG] pgmap v244: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:53:58.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:53:58 vm05 bash[65415]: audit 2026-03-10T11:53:57.927614+0000 mon.c (mon.1) 512 : audit [DBG] from='client.? 192.168.123.105:0/3720129370' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
2026-03-10T11:53:59.089 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:53:58 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:53:58] "GET /metrics HTTP/1.1" 200 37922 "" "Prometheus/2.51.0"
2026-03-10T11:53:59.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:59 vm07 bash[46158]: audit 2026-03-10T11:53:58.397614+0000 mgr.y (mgr.44107) 466 : audit [DBG] from='client.54759 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:53:59.946 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:53:59 vm07 bash[46158]: audit 2026-03-10T11:53:58.922659+0000 mgr.y (mgr.44107) 467 : audit [DBG] from='client.34676 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout:{
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "image": "quay.io/ceph/ceph",
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "registry": "quay.io",
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "bare_image": "ceph/ceph",
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "versions": [
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "20.2.0",
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "20.1.1",
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "20.1.0",
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "19.2.3",
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "19.2.2",
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "19.2.1",
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: "19.2.0"
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout: ]
2026-03-10T11:54:00.325 INFO:teuthology.orchestra.run.vm05.stdout:}
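`ceph orch upgrade ls` asks the registry (quay.io here) which ceph/ceph releases are available, newest first, so the first array element is the latest published version. For example, to pick it out of the JSON above:

    sudo cephadm shell -- ceph orch upgrade ls | jq -r '.versions[0]'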
2026-03-10T11:54:00.396 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- bash -c 'ceph orch upgrade ls --image quay.io/ceph/ceph --show-all-versions | grep 16.2.0'
2026-03-10T11:54:00.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:00 vm05 bash[65415]: cluster 2026-03-10T11:53:59.501307+0000 mgr.y (mgr.44107) 468 : cluster [DBG] pgmap v245: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:54:01.947 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:54:01 vm07 bash[46158]: audit 2026-03-10T11:54:00.823710+0000 mgr.y (mgr.44107) 469 : audit [DBG] from='client.54771 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "show_all_versions": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:54:02.209 INFO:teuthology.orchestra.run.vm05.stdout: "16.2.0",
2026-03-10T11:54:02.445 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- bash -c 'ceph orch upgrade ls --image quay.io/ceph/ceph --tags | grep v16.2.2'
2026-03-10T11:54:02.839 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:02 vm05 bash[65415]: cluster 2026-03-10T11:54:01.501700+0000 mgr.y (mgr.44107) 470 : cluster [DBG] pgmap v246: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:54:05.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:04 vm05 bash[65415]: cluster 2026-03-10T11:54:03.502506+0000 mgr.y (mgr.44107) 471 : cluster [DBG] pgmap v247: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-10T11:54:05.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:04 vm05 bash[65415]: audit 2026-03-10T11:54:03.891094+0000 mgr.y (mgr.44107) 472 : audit [DBG] from='client.54777 -' entity='client.admin' cmd=[{"prefix": "orch upgrade ls", "image": "quay.io/ceph/ceph", "tags": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-10T11:54:05.263 INFO:teuthology.orchestra.run.vm05.stdout: "v16.2.2",
2026-03-10T11:54:05.263 INFO:teuthology.orchestra.run.vm05.stdout: "v16.2.2-20210505",
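With `--show-all-versions` the listing reaches back past the running release (hence the `16.2.0` hit above), and with `--tags` it returns raw image tags rather than parsed versions, which is why both `v16.2.2` and the dated build tag `v16.2.2-20210505` match. The test only greps the text output, e.g.:

    sudo cephadm shell -- ceph orch upgrade ls --image quay.io/ceph/ceph --tags | grep v16.2.2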
2026-03-10T11:54:05.513 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2026-03-10T11:54:05.533 INFO:tasks.cephadm:Teardown begin
2026-03-10T11:54:05.533 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T11:54:05.542 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T11:54:05.573 INFO:tasks.cephadm:Disabling cephadm mgr module
2026-03-10T11:54:05.573 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph/ceph:v17.2.0 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d -- ceph mgr module disable cephadm
2026-03-10T11:54:06.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:05 vm05 bash[65415]: audit 2026-03-10T11:54:04.828601+0000 mgr.y (mgr.44107) 473 : audit [DBG] from='client.54669 -' entity='client.iscsi.foo.vm05.txapnk' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-10T11:54:06.089 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:05 vm05 bash[65415]: audit 2026-03-10T11:54:05.500174+0000 mon.c (mon.1) 513 : audit [DBG] from='mgr.44107 192.168.123.105:0/163307100' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-10T11:54:07.113 INFO:teuthology.orchestra.run.vm05.stderr:Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
2026-03-10T11:54:07.235 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:54:07.235 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2026-03-10T11:54:07.235 DEBUG:teuthology.orchestra.run.vm05:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T11:54:07.238 DEBUG:teuthology.orchestra.run.vm07:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2026-03-10T11:54:07.241 INFO:tasks.cephadm:Stopping all daemons...
2026-03-10T11:54:07.241 INFO:tasks.cephadm.mon.a:Stopping mon.a...
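The `ObjectNotFound ... conf_read_file` failure is expected here: the teardown removed /etc/ceph/ceph.conf two steps earlier, yet the module-disable shell was still pointed at it with `-c /etc/ceph/ceph.conf`, so the client had no configuration to read. The task treats the non-zero exit as non-fatal and proceeds to stop the daemons. A sketch of an ordering that would avoid the error, assuming the same teardown steps:

    # disable the module while the conf is still present, then remove it
    sudo cephadm shell -- ceph mgr module disable cephadm
    sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring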
2026-03-10T11:54:07.241 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.a
2026-03-10T11:54:07.402 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:07 vm05 bash[65415]: cluster 2026-03-10T11:54:05.508669+0000 mgr.y (mgr.44107) 474 : cluster [DBG] pgmap v248: 161 pgs: 161 active+clean; 457 KiB data, 288 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-10T11:54:07.402 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:54:07 vm05 systemd[1]: Stopping Ceph mon.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:07.402 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:54:07 vm05 bash[68966]: debug 2026-03-10T11:54:07.338+0000 7f930836f640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T11:54:07.402 INFO:journalctl@ceph.mon.a.vm05.stdout:Mar 10 11:54:07 vm05 bash[68966]: debug 2026-03-10T11:54:07.338+0000 7f930836f640 -1 mon.a@0(leader) e4 *** Got Signal Terminated ***
2026-03-10T11:54:07.644 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.a.service'
2026-03-10T11:54:07.659 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:07.659 INFO:tasks.cephadm.mon.a:Stopped mon.a
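Each daemon is stopped through its cephadm-managed systemd unit, named ceph-<fsid>@<daemon>.service; the matching `pkill` only tears down the teuthology-side `journalctl -f` watcher for that unit, not the daemon itself. To enumerate the units belonging to this cluster on a host, for instance:

    sudo systemctl list-units 'ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@*'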
2026-03-10T11:54:07.659 INFO:tasks.cephadm.mon.b:Stopping mon.c...
2026-03-10T11:54:07.659 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.c
2026-03-10T11:54:07.896 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:07 vm05 systemd[1]: Stopping Ceph mon.c for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:07.896 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:07 vm05 bash[65415]: debug 2026-03-10T11:54:07.870+0000 7fb3c92e4640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T11:54:07.896 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:07 vm05 bash[65415]: debug 2026-03-10T11:54:07.870+0000 7fb3c92e4640 -1 mon.c@1(peon) e4 *** Got Signal Terminated ***
2026-03-10T11:54:08.089 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:07 vm05 bash[53899]: [10/Mar/2026:11:54:07] ENGINE Bus STOPPING
2026-03-10T11:54:08.372 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:08 vm05 bash[53899]: [10/Mar/2026:11:54:08] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-10T11:54:08.372 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:08 vm05 bash[53899]: [10/Mar/2026:11:54:08] ENGINE Bus STOPPED
2026-03-10T11:54:08.372 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:08 vm05 bash[53899]: [10/Mar/2026:11:54:08] ENGINE Bus STARTING
2026-03-10T11:54:08.373 INFO:journalctl@ceph.mon.c.vm05.stdout:Mar 10 11:54:08 vm05 bash[111473]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mon-c
2026-03-10T11:54:08.376 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.c.service'
2026-03-10T11:54:08.387 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:08.387 INFO:tasks.cephadm.mon.b:Stopped mon.c
2026-03-10T11:54:08.387 INFO:tasks.cephadm.mon.b:Stopping mon.b...
2026-03-10T11:54:08.387 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.b
2026-03-10T11:54:08.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:54:08 vm07 systemd[1]: Stopping Ceph mon.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:08.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:54:08 vm07 bash[46158]: debug 2026-03-10T11:54:08.466+0000 7f825c72e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-10T11:54:08.696 INFO:journalctl@ceph.mon.b.vm07.stdout:Mar 10 11:54:08 vm07 bash[46158]: debug 2026-03-10T11:54:08.466+0000 7f825c72e640 -1 mon.b@2(peon) e4 *** Got Signal Terminated ***
2026-03-10T11:54:08.836 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mon.b.service'
2026-03-10T11:54:08.839 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:08 vm05 bash[53899]: [10/Mar/2026:11:54:08] ENGINE Serving on http://:::9283
2026-03-10T11:54:08.848 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:08 vm05 bash[53899]: [10/Mar/2026:11:54:08] ENGINE Bus STARTED
2026-03-10T11:54:08.848 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:08.849 INFO:tasks.cephadm.mon.b:Stopped mon.b
2026-03-10T11:54:08.849 INFO:tasks.cephadm.mgr.y:Stopping mgr.y...
2026-03-10T11:54:08.849 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.y
2026-03-10T11:54:09.165 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:08 vm05 bash[53899]: ::ffff:192.168.123.107 - - [10/Mar/2026:11:54:08] "GET /metrics HTTP/1.1" 200 37921 "" "Prometheus/2.51.0"
2026-03-10T11:54:09.165 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:08 vm05 systemd[1]: Stopping Ceph mgr.y for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:09.165 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:08 vm05 bash[53899]: debug 2026-03-10T11:54:08.982+0000 7f2790ade640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mgr -n mgr.y -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:54:09.165 INFO:journalctl@ceph.mgr.y.vm05.stdout:Mar 10 11:54:08 vm05 bash[53899]: debug 2026-03-10T11:54:08.982+0000 7f2790ade640 -1 mgr handle_mgr_signal *** Got signal Terminated ***
2026-03-10T11:54:09.253 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.y.service'
2026-03-10T11:54:09.264 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:09.265 INFO:tasks.cephadm.mgr.y:Stopped mgr.y
2026-03-10T11:54:09.265 INFO:tasks.cephadm.mgr.x:Stopping mgr.x...
2026-03-10T11:54:09.265 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.x
2026-03-10T11:54:09.590 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:54:09 vm07 systemd[1]: Stopping Ceph mgr.x for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:09.590 INFO:journalctl@ceph.mgr.x.vm07.stdout:Mar 10 11:54:09 vm07 bash[77104]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-mgr-x
2026-03-10T11:54:09.601 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@mgr.x.service'
2026-03-10T11:54:09.614 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:09.614 INFO:tasks.cephadm.mgr.x:Stopped mgr.x
2026-03-10T11:54:09.614 INFO:tasks.cephadm.osd.0:Stopping osd.0...
2026-03-10T11:54:09.614 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.0
2026-03-10T11:54:10.089 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:54:09 vm05 systemd[1]: Stopping Ceph osd.0 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:10.089 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:54:09 vm05 bash[86636]: debug 2026-03-10T11:54:09.714+0000 7fa543085640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:54:10.089 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:54:09 vm05 bash[86636]: debug 2026-03-10T11:54:09.714+0000 7fa543085640 -1 osd.0 149 *** Got signal Terminated ***
2026-03-10T11:54:10.089 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:54:09 vm05 bash[86636]: debug 2026-03-10T11:54:09.714+0000 7fa543085640 -1 osd.0 149 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:54:15.020 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:54:14 vm05 bash[111660]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-0
2026-03-10T11:54:15.308 INFO:journalctl@ceph.osd.0.vm05.stdout:Mar 10 11:54:15 vm05 bash[111720]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-0
2026-03-10T11:54:15.805 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.0.service'
2026-03-10T11:54:15.816 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:15.821 INFO:tasks.cephadm.osd.0:Stopped osd.0
2026-03-10T11:54:15.822 INFO:tasks.cephadm.osd.1:Stopping osd.1...
2026-03-10T11:54:15.822 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.1
2026-03-10T11:54:16.089 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:54:15 vm05 systemd[1]: Stopping Ceph osd.1 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:16.089 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:54:16 vm05 bash[93177]: debug 2026-03-10T11:54:16.002+0000 7f21ff61d640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:54:16.089 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:54:16 vm05 bash[93177]: debug 2026-03-10T11:54:16.002+0000 7f21ff61d640 -1 osd.1 149 *** Got signal Terminated ***
2026-03-10T11:54:16.089 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:54:16 vm05 bash[93177]: debug 2026-03-10T11:54:16.002+0000 7f21ff61d640 -1 osd.1 149 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:54:21.339 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:54:21 vm05 bash[111841]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-1
2026-03-10T11:54:21.772 INFO:journalctl@ceph.osd.1.vm05.stdout:Mar 10 11:54:21 vm05 bash[111913]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-1
2026-03-10T11:54:22.351 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.1.service'
2026-03-10T11:54:22.365 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:22.365 INFO:tasks.cephadm.osd.1:Stopped osd.1
2026-03-10T11:54:22.365 INFO:tasks.cephadm.osd.2:Stopping osd.2...
2026-03-10T11:54:22.365 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.2
2026-03-10T11:54:22.839 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:54:22 vm05 systemd[1]: Stopping Ceph osd.2 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:22.839 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:54:22 vm05 bash[80388]: debug 2026-03-10T11:54:22.546+0000 7f3739540640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:54:22.839 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:54:22 vm05 bash[80388]: debug 2026-03-10T11:54:22.546+0000 7f3739540640 -1 osd.2 149 *** Got signal Terminated ***
2026-03-10T11:54:22.839 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:54:22 vm05 bash[80388]: debug 2026-03-10T11:54:22.546+0000 7f3739540640 -1 osd.2 149 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:54:27.839 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:54:27 vm05 bash[112039]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-2
2026-03-10T11:54:28.339 INFO:journalctl@ceph.osd.2.vm05.stdout:Mar 10 11:54:27 vm05 bash[112104]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-2
2026-03-10T11:54:28.984 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.2.service'
2026-03-10T11:54:28.997 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:28.997 INFO:tasks.cephadm.osd.2:Stopped osd.2
2026-03-10T11:54:28.997 INFO:tasks.cephadm.osd.3:Stopping osd.3...
2026-03-10T11:54:28.997 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.3
2026-03-10T11:54:29.339 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:54:29 vm05 systemd[1]: Stopping Ceph osd.3 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:29.339 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:54:29 vm05 bash[75861]: debug 2026-03-10T11:54:29.134+0000 7f4c80335640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:54:29.339 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:54:29 vm05 bash[75861]: debug 2026-03-10T11:54:29.134+0000 7f4c80335640 -1 osd.3 149 *** Got signal Terminated ***
2026-03-10T11:54:29.339 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:54:29 vm05 bash[75861]: debug 2026-03-10T11:54:29.134+0000 7f4c80335640 -1 osd.3 149 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:54:34.470 INFO:journalctl@ceph.osd.3.vm05.stdout:Mar 10 11:54:34 vm05 bash[112226]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-3
2026-03-10T11:54:34.519 DEBUG:teuthology.orchestra.run.vm05:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.3.service'
2026-03-10T11:54:34.528 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:34.529 INFO:tasks.cephadm.osd.3:Stopped osd.3
2026-03-10T11:54:34.529 INFO:tasks.cephadm.osd.4:Stopping osd.4...
2026-03-10T11:54:34.529 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.4
2026-03-10T11:54:34.946 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:54:34 vm07 systemd[1]: Stopping Ceph osd.4 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
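[editor's note] Every OSD above logs "Immediate shutdown (osd_fast_shutdown=true)": with fast shutdown enabled (the default), SIGTERM exits the process immediately instead of draining in-flight ops. A hedged sketch for inspecting or changing the option with the ceph CLI; setting it to false is only appropriate if a slower, clean drain is actually wanted:

  ceph config get osd osd_fast_shutdown        # expected to print: true
  ceph config set osd osd_fast_shutdown false  # optional: prefer a clean drain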
2026-03-10T11:54:34.946 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:54:34 vm07 bash[54734]: debug 2026-03-10T11:54:34.566+0000 7f194379b640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:54:34.946 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:54:34 vm07 bash[54734]: debug 2026-03-10T11:54:34.566+0000 7f194379b640 -1 osd.4 149 *** Got signal Terminated ***
2026-03-10T11:54:34.946 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:54:34 vm07 bash[54734]: debug 2026-03-10T11:54:34.566+0000 7f194379b640 -1 osd.4 149 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:54:35.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:35 vm07 bash[64441]: debug 2026-03-10T11:54:35.590+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:36.946 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:54:36 vm07 bash[54734]: debug 2026-03-10T11:54:36.606+0000 7f193fdb4640 -1 osd.4 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.950628+0000 front 2026-03-10T11:54:10.950650+0000 (oldest deadline 2026-03-10T11:54:36.250198+0000)
2026-03-10T11:54:36.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:36 vm07 bash[64441]: debug 2026-03-10T11:54:36.546+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:37.946 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:54:37 vm07 bash[54734]: debug 2026-03-10T11:54:37.626+0000 7f193fdb4640 -1 osd.4 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.950628+0000 front 2026-03-10T11:54:10.950650+0000 (oldest deadline 2026-03-10T11:54:36.250198+0000)
2026-03-10T11:54:37.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:37 vm07 bash[69258]: debug 2026-03-10T11:54:37.754+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:37.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:37 vm07 bash[64441]: debug 2026-03-10T11:54:37.566+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:38.946 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:54:38 vm07 bash[54734]: debug 2026-03-10T11:54:38.618+0000 7f193fdb4640 -1 osd.4 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.950628+0000 front 2026-03-10T11:54:10.950650+0000 (oldest deadline 2026-03-10T11:54:36.250198+0000)
2026-03-10T11:54:38.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:38 vm07 bash[64441]: debug 2026-03-10T11:54:38.594+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
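[editor's note] The heartbeat_check failures here are expected, not a fault: the vm05 OSDs (osd.0-3) were stopped first, so the surviving vm07 OSDs (osd.4-7) keep reporting missing replies on their back/front heartbeat addresses until they are stopped in turn. A small sketch, assuming shell access on vm07, to summarize which peers one OSD considers unreachable (the grep pattern matches the journal lines above):

  FSID=72041074-1c73-11f1-8607-4fca9a5e0a4d
  sudo journalctl -u "ceph-${FSID}@osd.6.service" --since "11:54" \
    | grep -o 'no reply from [0-9.:]* osd\.[0-9]*' | sort | uniq -c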
2026-03-10T11:54:38.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:38 vm07 bash[69258]: debug 2026-03-10T11:54:38.786+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:39.798 INFO:journalctl@ceph.osd.4.vm07.stdout:Mar 10 11:54:39 vm07 bash[77194]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-4
2026-03-10T11:54:39.798 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:39 vm07 bash[64441]: debug 2026-03-10T11:54:39.614+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:39.798 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:39 vm07 bash[43274]: ts=2026-03-10T11:54:39.529Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:54:39.798 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:39 vm07 bash[43274]: ts=2026-03-10T11:54:39.529Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:54:39.798 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:39 vm07 bash[43274]: ts=2026-03-10T11:54:39.530Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:54:39.798 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:39 vm07 bash[43274]: ts=2026-03-10T11:54:39.530Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:54:39.798 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:39 vm07 bash[43274]: ts=2026-03-10T11:54:39.530Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:54:39.798 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:39 vm07 bash[43274]: ts=2026-03-10T11:54:39.530Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.105:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.105:8765: connect: connection refused"
2026-03-10T11:54:39.916 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.4.service'
2026-03-10T11:54:39.926 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:39.926 INFO:tasks.cephadm.osd.4:Stopped osd.4
2026-03-10T11:54:39.926 INFO:tasks.cephadm.osd.5:Stopping osd.5...
2026-03-10T11:54:39.926 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.5
2026-03-10T11:54:40.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:39 vm07 bash[69258]: debug 2026-03-10T11:54:39.798+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:40.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:39 vm07 bash[59578]: debug 2026-03-10T11:54:39.794+0000 7f0e349d5640 -1 osd.5 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:13.548085+0000 front 2026-03-10T11:54:13.548020+0000 (oldest deadline 2026-03-10T11:54:39.447167+0000)
2026-03-10T11:54:40.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:39 vm07 systemd[1]: Stopping Ceph osd.5 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:40.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:40 vm07 bash[59578]: debug 2026-03-10T11:54:40.002+0000 7f0e383bc640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:54:40.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:40 vm07 bash[59578]: debug 2026-03-10T11:54:40.002+0000 7f0e383bc640 -1 osd.5 149 *** Got signal Terminated ***
2026-03-10T11:54:40.196 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:40 vm07 bash[59578]: debug 2026-03-10T11:54:40.002+0000 7f0e383bc640 -1 osd.5 149 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:54:40.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:40 vm07 bash[69258]: debug 2026-03-10T11:54:40.794+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:40.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:40 vm07 bash[59578]: debug 2026-03-10T11:54:40.758+0000 7f0e349d5640 -1 osd.5 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:13.548085+0000 front 2026-03-10T11:54:13.548020+0000 (oldest deadline 2026-03-10T11:54:39.447167+0000)
2026-03-10T11:54:40.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:40 vm07 bash[64441]: debug 2026-03-10T11:54:40.574+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:41.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:41 vm07 bash[69258]: debug 2026-03-10T11:54:41.774+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:41.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:41 vm07 bash[59578]: debug 2026-03-10T11:54:41.786+0000 7f0e349d5640 -1 osd.5 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:13.548085+0000 front 2026-03-10T11:54:13.548020+0000 (oldest deadline 2026-03-10T11:54:39.447167+0000)
2026-03-10T11:54:41.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:41 vm07 bash[64441]: debug 2026-03-10T11:54:41.602+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:42.947 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:42 vm07 bash[59578]: debug 2026-03-10T11:54:42.822+0000 7f0e349d5640 -1 osd.5 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:13.548085+0000 front 2026-03-10T11:54:13.548020+0000 (oldest deadline 2026-03-10T11:54:39.447167+0000)
2026-03-10T11:54:42.947 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:42 vm07 bash[64441]: debug 2026-03-10T11:54:42.586+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:42.947 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:42 vm07 bash[64441]: debug 2026-03-10T11:54:42.586+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:18.263154+0000 front 2026-03-10T11:54:18.263288+0000 (oldest deadline 2026-03-10T11:54:41.763115+0000)
2026-03-10T11:54:42.947 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:42 vm07 bash[69258]: debug 2026-03-10T11:54:42.770+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:42.947 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:42 vm07 bash[69258]: debug 2026-03-10T11:54:42.770+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:43.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:43 vm07 bash[69258]: debug 2026-03-10T11:54:43.738+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:43.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:43 vm07 bash[69258]: debug 2026-03-10T11:54:43.738+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:43.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:43 vm07 bash[59578]: debug 2026-03-10T11:54:43.854+0000 7f0e349d5640 -1 osd.5 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:13.548085+0000 front 2026-03-10T11:54:13.548020+0000 (oldest deadline 2026-03-10T11:54:39.447167+0000)
2026-03-10T11:54:43.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:43 vm07 bash[64441]: debug 2026-03-10T11:54:43.622+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:43.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:43 vm07 bash[64441]: debug 2026-03-10T11:54:43.622+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:18.263154+0000 front 2026-03-10T11:54:18.263288+0000 (oldest deadline 2026-03-10T11:54:41.763115+0000)
2026-03-10T11:54:44.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:44 vm07 bash[69258]: debug 2026-03-10T11:54:44.750+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:44.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:44 vm07 bash[69258]: debug 2026-03-10T11:54:44.750+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:44.946 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:44 vm07 bash[59578]: debug 2026-03-10T11:54:44.842+0000 7f0e349d5640 -1 osd.5 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:13.548085+0000 front 2026-03-10T11:54:13.548020+0000 (oldest deadline 2026-03-10T11:54:39.447167+0000)
2026-03-10T11:54:44.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:44 vm07 bash[64441]: debug 2026-03-10T11:54:44.654+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:44.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:44 vm07 bash[64441]: debug 2026-03-10T11:54:44.654+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:18.263154+0000 front 2026-03-10T11:54:18.263288+0000 (oldest deadline 2026-03-10T11:54:41.763115+0000)
2026-03-10T11:54:45.331 INFO:journalctl@ceph.osd.5.vm07.stdout:Mar 10 11:54:45 vm07 bash[77372]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-5
2026-03-10T11:54:45.382 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.5.service'
2026-03-10T11:54:45.394 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:45.394 INFO:tasks.cephadm.osd.5:Stopped osd.5
2026-03-10T11:54:45.394 INFO:tasks.cephadm.osd.6:Stopping osd.6...
2026-03-10T11:54:45.394 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.6
2026-03-10T11:54:45.625 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:45 vm07 systemd[1]: Stopping Ceph osd.6 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
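[editor's note] The earlier burst of "Unable to refresh target groups" errors from prometheus.a is a direct consequence of stopping mgr.y: Prometheus is configured with HTTP service discovery against the mgr's SD endpoint at 192.168.123.105:8765, so once the mgr is down every refresh gets connection refused. A one-line liveness check against the same endpoint (URL taken verbatim from the errors above; it fails with "connection refused" at this point in the run):

  curl -s 'http://192.168.123.105:8765/sd/prometheus/sd-config?service=mgr-prometheus'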
2026-03-10T11:54:45.625 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:45 vm07 bash[64441]: debug 2026-03-10T11:54:45.478+0000 7f86d94c9640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:54:45.625 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:45 vm07 bash[64441]: debug 2026-03-10T11:54:45.478+0000 7f86d94c9640 -1 osd.6 149 *** Got signal Terminated ***
2026-03-10T11:54:45.625 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:45 vm07 bash[64441]: debug 2026-03-10T11:54:45.478+0000 7f86d94c9640 -1 osd.6 149 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:54:45.696 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:45 vm07 bash[64441]: debug 2026-03-10T11:54:45.622+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:45.696 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:45 vm07 bash[64441]: debug 2026-03-10T11:54:45.622+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:18.263154+0000 front 2026-03-10T11:54:18.263288+0000 (oldest deadline 2026-03-10T11:54:41.763115+0000)
2026-03-10T11:54:45.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:45 vm07 bash[69258]: debug 2026-03-10T11:54:45.794+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:45.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:45 vm07 bash[69258]: debug 2026-03-10T11:54:45.794+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:46.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:46 vm07 bash[69258]: debug 2026-03-10T11:54:46.826+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:46.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:46 vm07 bash[69258]: debug 2026-03-10T11:54:46.826+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:46.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:46 vm07 bash[64441]: debug 2026-03-10T11:54:46.630+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:46.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:46 vm07 bash[64441]: debug 2026-03-10T11:54:46.630+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:18.263154+0000 front 2026-03-10T11:54:18.263288+0000 (oldest deadline 2026-03-10T11:54:41.763115+0000)
2026-03-10T11:54:47.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:47 vm07 bash[69258]: debug 2026-03-10T11:54:47.850+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:47.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:47 vm07 bash[69258]: debug 2026-03-10T11:54:47.850+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:47.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:47 vm07 bash[64441]: debug 2026-03-10T11:54:47.674+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:47.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:47 vm07 bash[64441]: debug 2026-03-10T11:54:47.674+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:18.263154+0000 front 2026-03-10T11:54:18.263288+0000 (oldest deadline 2026-03-10T11:54:41.763115+0000)
2026-03-10T11:54:49.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:48 vm07 bash[64441]: debug 2026-03-10T11:54:48.710+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:49.196 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:48 vm07 bash[64441]: debug 2026-03-10T11:54:48.710+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:18.263154+0000 front 2026-03-10T11:54:18.263288+0000 (oldest deadline 2026-03-10T11:54:41.763115+0000)
2026-03-10T11:54:49.197 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:48 vm07 bash[69258]: debug 2026-03-10T11:54:48.870+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:49.197 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:48 vm07 bash[69258]: debug 2026-03-10T11:54:48.870+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:49.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:49 vm07 bash[69258]: debug 2026-03-10T11:54:49.826+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:49.946 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:49 vm07 bash[69258]: debug 2026-03-10T11:54:49.826+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:49.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:49 vm07 bash[64441]: debug 2026-03-10T11:54:49.686+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:10.662863+0000 front 2026-03-10T11:54:10.663058+0000 (oldest deadline 2026-03-10T11:54:34.762601+0000)
2026-03-10T11:54:49.946 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:49 vm07 bash[64441]: debug 2026-03-10T11:54:49.686+0000 7f86d52e1640 -1 osd.6 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:18.263154+0000 front 2026-03-10T11:54:18.263288+0000 (oldest deadline 2026-03-10T11:54:41.763115+0000)
2026-03-10T11:54:50.790 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:50 vm07 bash[77547]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-6
2026-03-10T11:54:50.790 INFO:journalctl@ceph.osd.6.vm07.stdout:Mar 10 11:54:50 vm07 bash[77607]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-6
2026-03-10T11:54:51.050 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:50 vm07 bash[69258]: debug 2026-03-10T11:54:50.790+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:51.050 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:50 vm07 bash[69258]: debug 2026-03-10T11:54:50.790+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:52.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:51 vm07 bash[69258]: debug 2026-03-10T11:54:51.766+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:52.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:51 vm07 bash[69258]: debug 2026-03-10T11:54:51.766+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:52.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:51 vm07 bash[69258]: debug 2026-03-10T11:54:51.766+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6822 osd.2 since back 2026-03-10T11:54:26.593722+0000 front 2026-03-10T11:54:26.593727+0000 (oldest deadline 2026-03-10T11:54:51.293502+0000)
2026-03-10T11:54:52.221 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.6.service'
2026-03-10T11:54:52.232 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:52.234 INFO:tasks.cephadm.osd.6:Stopped osd.6
2026-03-10T11:54:52.234 INFO:tasks.cephadm.osd.7:Stopping osd.7...
2026-03-10T11:54:52.234 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.7
2026-03-10T11:54:52.697 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:52 vm07 systemd[1]: Stopping Ceph osd.7 for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:52.697 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:52 vm07 bash[69258]: debug 2026-03-10T11:54:52.386+0000 7f5cc4352640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0
2026-03-10T11:54:52.697 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:52 vm07 bash[69258]: debug 2026-03-10T11:54:52.386+0000 7f5cc4352640 -1 osd.7 149 *** Got signal Terminated ***
2026-03-10T11:54:52.697 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:52 vm07 bash[69258]: debug 2026-03-10T11:54:52.386+0000 7f5cc4352640 -1 osd.7 149 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-10T11:54:53.197 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:52 vm07 bash[69258]: debug 2026-03-10T11:54:52.794+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:53.197 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:52 vm07 bash[69258]: debug 2026-03-10T11:54:52.794+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:53.197 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:52 vm07 bash[69258]: debug 2026-03-10T11:54:52.794+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6822 osd.2 since back 2026-03-10T11:54:26.593722+0000 front 2026-03-10T11:54:26.593727+0000 (oldest deadline 2026-03-10T11:54:51.293502+0000)
2026-03-10T11:54:54.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:53 vm07 bash[69258]: debug 2026-03-10T11:54:53.746+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:54.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:53 vm07 bash[69258]: debug 2026-03-10T11:54:53.746+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:54.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:53 vm07 bash[69258]: debug 2026-03-10T11:54:53.746+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6822 osd.2 since back 2026-03-10T11:54:26.593722+0000 front 2026-03-10T11:54:26.593727+0000 (oldest deadline 2026-03-10T11:54:51.293502+0000)
2026-03-10T11:54:55.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:54 vm07 bash[69258]: debug 2026-03-10T11:54:54.722+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:55.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:54 vm07 bash[69258]: debug 2026-03-10T11:54:54.722+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:55.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:54 vm07 bash[69258]: debug 2026-03-10T11:54:54.722+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6822 osd.2 since back 2026-03-10T11:54:26.593722+0000 front 2026-03-10T11:54:26.593727+0000 (oldest deadline 2026-03-10T11:54:51.293502+0000)
2026-03-10T11:54:56.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:55 vm07 bash[69258]: debug 2026-03-10T11:54:55.730+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:56.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:55 vm07 bash[69258]: debug 2026-03-10T11:54:55.730+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:56.196 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:55 vm07 bash[69258]: debug 2026-03-10T11:54:55.730+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6822 osd.2 since back 2026-03-10T11:54:26.593722+0000 front 2026-03-10T11:54:26.593727+0000 (oldest deadline 2026-03-10T11:54:51.293502+0000)
2026-03-10T11:54:57.197 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:56 vm07 bash[69258]: debug 2026-03-10T11:54:56.698+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6806 osd.0 since back 2026-03-10T11:54:12.092649+0000 front 2026-03-10T11:54:12.092467+0000 (oldest deadline 2026-03-10T11:54:36.792265+0000)
2026-03-10T11:54:57.197 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:56 vm07 bash[69258]: debug 2026-03-10T11:54:56.698+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6814 osd.1 since back 2026-03-10T11:54:20.293291+0000 front 2026-03-10T11:54:20.292952+0000 (oldest deadline 2026-03-10T11:54:42.592769+0000)
2026-03-10T11:54:57.197 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:56 vm07 bash[69258]: debug 2026-03-10T11:54:56.698+0000 7f5cc016a640 -1 osd.7 149 heartbeat_check: no reply from 192.168.123.105:6822 osd.2 since back 2026-03-10T11:54:26.593722+0000 front 2026-03-10T11:54:26.593727+0000 (oldest deadline 2026-03-10T11:54:51.293502+0000)
2026-03-10T11:54:57.682 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:57 vm07 bash[77728]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-7
2026-03-10T11:54:57.682 INFO:journalctl@ceph.osd.7.vm07.stdout:Mar 10 11:54:57 vm07 bash[77797]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-osd-7
2026-03-10T11:54:58.487 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@osd.7.service'
2026-03-10T11:54:58.497 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:58.497 INFO:tasks.cephadm.osd.7:Stopped osd.7
2026-03-10T11:54:58.498 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a...
2026-03-10T11:54:58.498 DEBUG:teuthology.orchestra.run.vm07:> sudo systemctl stop ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@prometheus.a
2026-03-10T11:54:58.666 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 systemd[1]: Stopping Ceph prometheus.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.666Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..."
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.666Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..."
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.666Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..."
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.666Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..."
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.666Z caller=main.go:984 level=info msg="Scrape discovery manager stopped"
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.666Z caller=main.go:998 level=info msg="Notify discovery manager stopped"
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.666Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped"
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.666Z caller=main.go:1039 level=info msg="Stopping scrape manager..."
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.666Z caller=main.go:1031 level=info msg="Scrape manager stopped"
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.693Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..."
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.693Z caller=main.go:1261 level=info msg="Notifier manager stopped"
2026-03-10T11:54:58.793 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[43274]: ts=2026-03-10T11:54:58.693Z caller=main.go:1273 level=info msg="See you next time!"
2026-03-10T11:54:58.947 INFO:journalctl@ceph.prometheus.a.vm07.stdout:Mar 10 11:54:58 vm07 bash[77916]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-prometheus-a
2026-03-10T11:54:58.962 DEBUG:teuthology.orchestra.run.vm07:> sudo pkill -f 'journalctl -f -n 0 -u ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@prometheus.a.service'
2026-03-10T11:54:58.989 DEBUG:teuthology.orchestra.run:got remote process result: None
2026-03-10T11:54:58.989 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a
2026-03-10T11:54:58.989 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d --force --keep-logs
2026-03-10T11:55:01.934 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:01 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:01.934 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:55:01 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:02.184 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:01 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:02.184 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:55:01 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:02.184 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:55:02 vm05 systemd[1]: Stopping Ceph alertmanager.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:55:02.184 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:55:02 vm05 bash[50896]: ts=2026-03-10T11:55:02.133Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
2026-03-10T11:55:02.589 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:55:02 vm05 bash[112497]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-alertmanager-a
2026-03-10T11:55:02.976 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:02 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:02.977 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:02 vm05 systemd[1]: Stopping Ceph node-exporter.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:55:02.977 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:55:02 vm05 bash[112551]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-alertmanager-a
2026-03-10T11:55:02.977 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:55:02 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@alertmanager.a.service: Deactivated successfully.
2026-03-10T11:55:02.977 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:55:02 vm05 systemd[1]: Stopped Ceph alertmanager.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:55:02.977 INFO:journalctl@ceph.alertmanager.a.vm05.stdout:Mar 10 11:55:02 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:03.281 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:03 vm05 bash[112616]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-node-exporter-a
2026-03-10T11:55:03.534 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:03 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.a.service: Main process exited, code=exited, status=143/n/a
2026-03-10T11:55:03.534 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:03 vm05 bash[112670]: Error response from daemon: No such container: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-node-exporter-a
2026-03-10T11:55:03.534 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:03 vm05 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.a.service: Failed with result 'exit-code'.
2026-03-10T11:55:03.534 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:03 vm05 systemd[1]: Stopped Ceph node-exporter.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
2026-03-10T11:55:03.534 INFO:journalctl@ceph.node-exporter.a.vm05.stdout:Mar 10 11:55:03 vm05 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:25.796 INFO:teuthology.orchestra.run.vm05.stderr:Traceback (most recent call last):
2026-03-10T11:55:25.797 INFO:teuthology.orchestra.run.vm05.stderr: File "/home/ubuntu/cephtest/cephadm", line 8634, in <module>
2026-03-10T11:55:25.797 INFO:teuthology.orchestra.run.vm05.stderr: main()
2026-03-10T11:55:25.797 INFO:teuthology.orchestra.run.vm05.stderr: File "/home/ubuntu/cephtest/cephadm", line 8622, in main
2026-03-10T11:55:25.797 INFO:teuthology.orchestra.run.vm05.stderr: r = ctx.func(ctx)
2026-03-10T11:55:25.797 INFO:teuthology.orchestra.run.vm05.stderr: File "/home/ubuntu/cephtest/cephadm", line 6538, in command_rm_cluster
2026-03-10T11:55:25.797 INFO:teuthology.orchestra.run.vm05.stderr: with open(files[0]) as f:
2026-03-10T11:55:25.797 INFO:teuthology.orchestra.run.vm05.stderr:IsADirectoryError: [Errno 21] Is a directory: '/etc/ceph/ceph.conf'
2026-03-10T11:55:25.809 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:55:25.809 DEBUG:teuthology.orchestra.run.vm07:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d --force --keep-logs
2026-03-10T11:55:28.696 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:28.696 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:29.007 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:29.007 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:29.007 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:29.007 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:28 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:29.446 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:29.446 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:29 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-10T11:55:39.520 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:39 vm07 systemd[1]: /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
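[editor's note] The rm-cluster traceback above and the later rm failures share one root cause: /etc/ceph/ceph.conf on vm05 is a directory, not a regular file, so cephadm's open() raises IsADirectoryError and a plain rm -f refuses it. One common way this happens (an assumption here, not confirmed by this log) is a container bind-mount whose target path was auto-created as a directory because the source file was missing at mount time. A hedged sketch to confirm and clean up by hand on the affected node:

  stat -c '%F' /etc/ceph/ceph.conf    # prints "directory" on the bad node
  # assumption: the directory holds nothing worth keeping before removal
  sudo rm -rf /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring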
[2026-03-10T11:55:39.520 and 11:55:39.782: two more repeats of the same KillMode=none warning via journalctl@ceph.grafana.a.vm07; text omitted]
2026-03-10T11:55:39.782 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:39 vm07 systemd[1]: Stopping Ceph grafana.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:55:39.782 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:39 vm07 bash[44829]: logger=server t=2026-03-10T11:55:39.724402961Z level=info msg="Shutdown started" reason="System signal: terminated"
2026-03-10T11:55:39.782 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:39 vm07 bash[44829]: logger=ticker t=2026-03-10T11:55:39.724458676Z level=info msg=stopped last_tick=2026-03-10T11:55:30Z
2026-03-10T11:55:39.782 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:39 vm07 bash[44829]: logger=tracing t=2026-03-10T11:55:39.724625348Z level=info msg="Closing tracing"
2026-03-10T11:55:39.782 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:39 vm07 bash[44829]: logger=grafana-apiserver t=2026-03-10T11:55:39.724849878Z level=info msg="StorageObjectCountTracker pruner is exiting"
2026-03-10T11:55:39.782 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:39 vm07 bash[78316]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-grafana-a
[2026-03-10T11:55:39.782: one more repeat of the KillMode=none warning via journalctl@ceph.node-exporter.b.vm07; text omitted]
2026-03-10T11:55:40.082 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:39 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@grafana.a.service: Deactivated successfully.
2026-03-10T11:55:40.082 INFO:journalctl@ceph.grafana.a.vm07.stdout:Mar 10 11:55:39 vm07 systemd[1]: Stopped Ceph grafana.a for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
[2026-03-10T11:55:40.082: one more repeat of the KillMode=none warning via journalctl@ceph.grafana.a.vm07; text omitted]
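
[analysis] The flood of KillMode=none warnings is noise from the stop path, not a cause of the failure: line 23 of the generated unit /etc/systemd/system/ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service sets KillMode=none (cephadm's unit stops the container itself), and newer systemd prints this deprecation notice every time it touches the unit. If the repetition needs silencing on a test node, a drop-in override is one option; a hedged sketch, noting that cephadm owns and may regenerate these unit files:

    # Hypothetical drop-in for the templated unit seen in the log above;
    # this was not done in the run and may be undone on redeploy.
    unit='ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@.service'
    sudo mkdir -p "/etc/systemd/system/${unit}.d"
    printf '[Service]\nKillMode=mixed\n' |
        sudo tee "/etc/systemd/system/${unit}.d/10-killmode.conf" >/dev/null
    sudo systemctl daemon-reload
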
[2026-03-10T11:55:40.082 and 11:55:40.389: two more repeats of the KillMode=none warning via journalctl@ceph.node-exporter.b.vm07; text omitted]
2026-03-10T11:55:40.389 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:40 vm07 systemd[1]: Stopping Ceph node-exporter.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d...
2026-03-10T11:55:40.389 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:40 vm07 bash[78470]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d-node-exporter-b
2026-03-10T11:55:40.389 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:40 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.b.service: Main process exited, code=exited, status=143/n/a
2026-03-10T11:55:40.389 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:40 vm07 systemd[1]: ceph-72041074-1c73-11f1-8607-4fca9a5e0a4d@node-exporter.b.service: Failed with result 'exit-code'.
2026-03-10T11:55:40.389 INFO:journalctl@ceph.node-exporter.b.vm07.stdout:Mar 10 11:55:40 vm07 systemd[1]: Stopped Ceph node-exporter.b for 72041074-1c73-11f1-8607-4fca9a5e0a4d.
[2026-03-10T11:55:40.696: one more repeat of the KillMode=none warning via journalctl@ceph.node-exporter.b.vm07; text omitted]
2026-03-10T11:55:40.821 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T11:55:40.828 INFO:teuthology.orchestra.run.vm05.stderr:rm: cannot remove '/etc/ceph/ceph.conf': Is a directory
2026-03-10T11:55:40.828 INFO:teuthology.orchestra.run.vm05.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory
2026-03-10T11:55:40.828 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:55:40.828 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-10T11:55:40.835 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-10T11:55:40.835 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1014/remote/vm05/crash
2026-03-10T11:55:40.835 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/crash -- .
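
[analysis] The rm failures pin down the state on vm05: both /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring exist as directories, not files. A common way to end up here is a container bind mount whose host-side source did not exist when the container started: Docker creates the missing source as a directory. Whether that is what happened in this run cannot be proven from the log alone, but the mechanism is easy to demonstrate (hypothetical demo, not from the run):

    # Docker creates a missing bind-mount source as a directory, turning
    # an expected config *file* path into a directory on the host.
    docker run --rm -v /tmp/ceph-demo.conf:/etc/demo.conf:ro busybox true
    ls -ld /tmp/ceph-demo.conf     # shows drwxr-xr-x: a directory
    sudo rmdir /tmp/ceph-demo.conf # clean up the demo
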
2026-03-10T11:55:40.875 INFO:teuthology.orchestra.run.vm05.stderr:tar: /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/crash: Cannot open: No such file or directory
2026-03-10T11:55:40.875 INFO:teuthology.orchestra.run.vm05.stderr:tar: Error is not recoverable: exiting now
2026-03-10T11:55:40.876 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/crash to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1014/remote/vm07/crash
2026-03-10T11:55:40.876 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/crash -- .
2026-03-10T11:55:40.882 INFO:teuthology.orchestra.run.vm07.stderr:tar: /var/lib/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/crash: Cannot open: No such file or directory
2026-03-10T11:55:40.882 INFO:teuthology.orchestra.run.vm07.stderr:tar: Error is not recoverable: exiting now
2026-03-10T11:55:40.883 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-10T11:55:40.883 DEBUG:teuthology.orchestra.run.vm05:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v CEPHADM_STRAY_DAEMON | egrep -v CEPHADM_FAILED_DAEMON | egrep -v CEPHADM_AGENT_DOWN | head -n 1
2026-03-10T11:55:40.926 INFO:tasks.cephadm:Compressing logs...
2026-03-10T11:55:40.926 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T11:55:40.970 DEBUG:teuthology.orchestra.run.vm07:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T11:55:40.975 INFO:teuthology.orchestra.run.vm05.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-10T11:55:40.977 INFO:teuthology.orchestra.run.vm07.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
[2026-03-10T11:55:40.975 through 11:55:41.037: interleaved gzip --verbose progress from vm05 and vm07, one start line and one "NN.N% -- replaced with *.log.gz" line per /var/log/ceph log file (cephadm.log, ceph.log and the mon/mgr/osd/rgw logs, 75-94% size reduction); concurrent writers spliced many of these lines together, chatter omitted]
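
[analysis] "Checking cluster log for badness" implements the job's log-only-match and log-ignorelist settings as a grep pipeline: keep ERR/WRN/SEC cluster-log lines, restrict to CEPHADM_ health codes, drop the ignorelisted ones, and report the first survivor; it printed nothing here, so the cluster log itself was clean. An equivalent, commented sketch of the same filters (grep -E spelling instead of the deprecated egrep):

    # Any line surviving this pipeline fails the badness check.
    sudo grep -E '\[ERR\]|\[WRN\]|\[SEC\]' \
        /var/log/ceph/72041074-1c73-11f1-8607-4fca9a5e0a4d/ceph.log |
      grep -E 'CEPHADM_' |                # log-only-match: CEPHADM_ codes
      grep -Ev '\(MDS_ALL_DOWN\)' |       # log-ignorelist entries follow
      grep -Ev '\(MDS_UP_LESS_THAN_MAX\)' |
      grep -Ev 'CEPHADM_STRAY_DAEMON|CEPHADM_FAILED_DAEMON|CEPHADM_AGENT_DOWN' |
      head -n 1                           # first offender is enough
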
[2026-03-10T11:55:41.043 through 11:55:43.145: the remaining interleaved gzip progress from both nodes (ceph.audit.log, ceph.cephadm.log, ceph-volume.log, tcmu-runner.log and the remaining mon/mgr/osd logs, each replaced in place with its .log.gz at 85-95% size reduction); chatter omitted]
2026-03-10T11:55:43.146 INFO:teuthology.orchestra.run.vm07.stderr:
2026-03-10T11:55:43.146 INFO:teuthology.orchestra.run.vm07.stderr:real 0m2.174s
2026-03-10T11:55:43.146 INFO:teuthology.orchestra.run.vm07.stderr:user 0m4.004s
2026-03-10T11:55:43.146 INFO:teuthology.orchestra.run.vm07.stderr:sys 0m0.238s
[2026-03-10T11:55:43.348 through 11:55:43.547: the last three gzip progress lines from vm05 (ceph-osd.1.log, ceph-osd.0.log and ceph-osd.3.log, each about 93.8-93.9% reduction); text omitted]
2026-03-10T11:55:43.548 INFO:teuthology.orchestra.run.vm05.stderr:
2026-03-10T11:55:43.548 INFO:teuthology.orchestra.run.vm05.stderr:real 0m2.578s
2026-03-10T11:55:43.549 INFO:teuthology.orchestra.run.vm05.stderr:user 0m4.713s
2026-03-10T11:55:43.549 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m0.255s
2026-03-10T11:55:43.549 INFO:tasks.cephadm:Archiving logs...
2026-03-10T11:55:43.549 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1014/remote/vm05/log
2026-03-10T11:55:43.549 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T11:55:43.797 DEBUG:teuthology.misc:Transferring archived files from vm07:/var/log/ceph to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1014/remote/vm07/log
2026-03-10T11:55:43.797 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-10T11:55:43.984 INFO:tasks.cephadm:Removing cluster...
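
[analysis] The compression step fans out one gzip per file with xargs --max-procs=0 (as many parallel processes as possible), which is why the --verbose output above is interleaved and some lines appear spliced together: several gzip processes share one stderr stream. A hedged variant that keeps the output readable at some speed cost:

    # Same compression as in the log, serialized so per-file output stays
    # in order; --max-procs=1 is the only change.
    time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 |
      sudo xargs --max-args=1 --max-procs=1 --verbose -0 --no-run-if-empty -- \
        gzip -5 --verbose --
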
2026-03-10T11:55:43.985 DEBUG:teuthology.orchestra.run.vm05:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d --force
2026-03-10T11:55:44.605 INFO:teuthology.orchestra.run.vm05.stderr:Traceback (most recent call last):
2026-03-10T11:55:44.605 INFO:teuthology.orchestra.run.vm05.stderr:  File "/home/ubuntu/cephtest/cephadm", line 8634, in <module>
2026-03-10T11:55:44.605 INFO:teuthology.orchestra.run.vm05.stderr:    main()
2026-03-10T11:55:44.605 INFO:teuthology.orchestra.run.vm05.stderr:  File "/home/ubuntu/cephtest/cephadm", line 8622, in main
2026-03-10T11:55:44.606 INFO:teuthology.orchestra.run.vm05.stderr:    r = ctx.func(ctx)
2026-03-10T11:55:44.606 INFO:teuthology.orchestra.run.vm05.stderr:  File "/home/ubuntu/cephtest/cephadm", line 6538, in command_rm_cluster
2026-03-10T11:55:44.606 INFO:teuthology.orchestra.run.vm05.stderr:    with open(files[0]) as f:
2026-03-10T11:55:44.606 INFO:teuthology.orchestra.run.vm05.stderr:IsADirectoryError: [Errno 21] Is a directory: '/etc/ceph/ceph.conf'
2026-03-10T11:55:44.618 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:55:44.618 INFO:tasks.cephadm:Teardown complete
2026-03-10T11:55:44.618 ERROR:teuthology.run_tasks:Manager failed: cephadm
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 2216, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 1845, in initialize_config
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 229, in download_cephadm
    _rm_cluster(ctx, cluster_name)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 383, in _rm_cluster
    remote.run(args=[
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm05 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d --force'
2026-03-10T11:55:44.619 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-10T11:55:44.621 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-10T11:55:44.621 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-10T11:55:44.622 DEBUG:teuthology.orchestra.run.vm07:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
[2026-03-10T11:55:44.852: ntpq -p peer table from vm07: five stratum-16 .POOL. placeholders plus eight reachable stratum-2/3 peers (reach 377), system peer *vps-fra8.orlean, offsets between +0.364 and -2.456 ms, jitter under 1 ms; full table omitted]
[2026-03-10T11:55:46.581: ntpq -p peer table from vm05: five stratum-16 .POOL. placeholders plus ten reachable stratum-2/3 peers (reach 377), system peer *vps-ber1.orlean, offsets between -2.084 and -5.500 ms, jitter up to about 3 ms; full table omitted]
2026-03-10T11:55:46.581 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-10T11:55:46.583 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-10T11:55:46.583 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-10T11:55:46.585 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-10T11:55:46.587 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-10T11:55:46.589 INFO:teuthology.task.internal:Duration was 2144.870641 seconds
2026-03-10T11:55:46.589 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-10T11:55:46.591 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-10T11:55:46.591 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T11:55:46.592 DEBUG:teuthology.orchestra.run.vm07:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-10T11:55:46.618 INFO:teuthology.task.internal.syslog:Checking logs for errors...
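
[analysis] The final clock-skew check never fails the job by itself (note the trailing || true); the peer tables summarized above show both nodes syncing against stratum-2 servers with offsets of a few milliseconds, so time was healthy at teardown. The fallback pattern, annotated:

    # Query NTP peers; fall back to chrony on hosts running chronyd;
    # never let the diagnostic itself abort the teardown.
    PATH=/usr/bin:/usr/sbin ntpq -p \
      || PATH=/usr/bin:/usr/sbin chronyc sources \
      || true
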
2026-03-10T11:55:46.618 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local
2026-03-10T11:55:46.618 DEBUG:teuthology.orchestra.run.vm05:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-10T11:55:46.666 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm07.local
2026-03-10T11:55:46.666 DEBUG:teuthology.orchestra.run.vm07:> [same kern.log filter pipeline as on vm05 above]
2026-03-10T11:55:46.676 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-10T11:55:46.676 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T11:55:46.710 DEBUG:teuthology.orchestra.run.vm07:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-10T11:55:46.834 INFO:teuthology.task.internal.syslog:Compressing syslogs...
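
[analysis] The syslog check greps the captured kern.log for kernel BUG/INFO/DEADLOCK markers and then strips a long list of known-benign matches; head -n 1 means a single surviving line is enough to mark the node bad, and no output here means both nodes passed. A condensed, commented equivalent (several of the benign-pattern exclusions are grouped or omitted for brevity):

    # Flag kernel trouble in kern.log, ignoring known-benign noise.
    grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' \
        /home/ubuntu/cephtest/archive/syslog/kern.log |
      grep -v 'task .* blocked for more than .* seconds' |  # hung-task spam
      grep -v CRON |                                        # cron INFO lines
      grep -Ev 'ceph-create-keys|ceph-crash' |              # expected Ceph chatter
      grep -Ev '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' |
      head -n 1    # any output at all fails the node
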
2026-03-10T11:55:46.834 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-10T11:55:46.835 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
[2026-03-10T11:55:46.841 through 11:55:46.867: interleaved gzip --verbose progress from both nodes for misc.log, kern.log and journalctl.log (misc.log and kern.log at 0.0% reduction, journalctl.log at about 90-92%); spliced chatter omitted]
2026-03-10T11:55:46.868 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-10T11:55:46.870 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-10T11:55:46.870 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T11:55:46.916 DEBUG:teuthology.orchestra.run.vm07:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-10T11:55:46.923 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-10T11:55:46.925 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T11:55:46.958 DEBUG:teuthology.orchestra.run.vm07:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-10T11:55:46.963 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core
2026-03-10T11:55:46.970 INFO:teuthology.orchestra.run.vm07.stdout:kernel.core_pattern = core
2026-03-10T11:55:46.977 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T11:55:47.015 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:55:47.016 DEBUG:teuthology.orchestra.run.vm07:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-10T11:55:47.021 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:55:47.021 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-10T11:55:47.024 INFO:teuthology.task.internal:Transferring archived files...
2026-03-10T11:55:47.024 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1014/remote/vm05
2026-03-10T11:55:47.024 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T11:55:47.067 DEBUG:teuthology.misc:Transferring archived files from vm07:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-10_01:00:38-orch-squid-none-default-vps/1014/remote/vm07
2026-03-10T11:55:47.067 DEBUG:teuthology.orchestra.run.vm07:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-10T11:55:47.075 INFO:teuthology.task.internal:Removing archive directory...
2026-03-10T11:55:47.075 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T11:55:47.110 DEBUG:teuthology.orchestra.run.vm07:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-10T11:55:47.118 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-10T11:55:47.120 INFO:teuthology.task.internal:Not uploading archives.
2026-03-10T11:55:47.120 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-10T11:55:47.123 INFO:teuthology.task.internal:Tidying up after the test...
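
[analysis] The coredump unwinder restores kernel.core_pattern, deletes stray cores produced by systemd-sysusers (expected noise, not bugs), and removes the coredump directory only if it is then empty; the follow-up test -e returning status 1 on both nodes means the directory is gone, i.e. no real coredumps were collected. The same logic unrolled and commented:

    # Restore the default core pattern that the test had overridden.
    sudo sysctl -w kernel.core_pattern=core
    # Drop cores from systemd-sysusers; anything else is a real finding.
    for f in $(sudo find /home/ubuntu/cephtest/archive/coredump -type f); do
        sudo file "$f" | grep -q systemd-sysusers && sudo rm "$f"
    done
    # Remove the directory only when nothing real remains ...
    rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
    # ... so a surviving directory signals genuine coredumps.
    test -e /home/ubuntu/cephtest/archive/coredump && echo "coredumps found"
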
2026-03-10T11:55:47.123 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T11:55:47.154 DEBUG:teuthology.orchestra.run.vm07:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-10T11:55:47.156 INFO:teuthology.orchestra.run.vm05.stdout: 258076 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 11:55 /home/ubuntu/cephtest
2026-03-10T11:55:47.156 INFO:teuthology.orchestra.run.vm05.stdout: 258199 316 -rwxrwxr-x 1 ubuntu ubuntu 320521 Mar 10 11:22 /home/ubuntu/cephtest/cephadm
2026-03-10T11:55:47.157 INFO:teuthology.orchestra.run.vm05.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-10T11:55:47.160 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-10T11:55:47.160 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 48, in base
    yield
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 2216, in task
    with contextutil.nested(
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 1845, in initialize_config
    yield
  File "/home/teuthos/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 229, in download_cephadm
    _rm_cluster(ctx, cluster_name)
  File "/home/teuthos/src/github.com_kshtsk_ceph_75a68fd8ca3f918fe9466b4c0bb385b7fc260a9b/qa/tasks/cephadm.py", line 383, in _rm_cluster
    remote.run(args=[
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm05 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d --force'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm05 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-10T11:55:47.160 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-10T11:55:47.163 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm05 with status 1: 'sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d --force'
2026-03-10T11:55:47.164 INFO:teuthology.run:Summary data:
description: orch/cephadm/upgrade/{1-start-distro/1-start-ubuntu_22.04 2-repo_digest/defaut 3-upgrade/staggered 4-wait 5-upgrade-ls agent/off mon_election/connectivity}
duration: 2144.8706414699554
failure_reason: 'Command failed on vm05 with status 1: ''sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid 72041074-1c73-11f1-8607-4fca9a5e0a4d --force'''
owner: kyr
status: fail
success: false
2026-03-10T11:55:47.164 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-10T11:55:47.165 INFO:teuthology.orchestra.run.vm07.stdout: 258077 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 10 11:55 /home/ubuntu/cephtest
2026-03-10T11:55:47.165 INFO:teuthology.orchestra.run.vm07.stdout: 258199 316 -rwxrwxr-x 1 ubuntu ubuntu 320521 Mar 10 11:22 /home/ubuntu/cephtest/cephadm
2026-03-10T11:55:47.165 INFO:teuthology.orchestra.run.vm07.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-10T11:55:47.187 INFO:teuthology.run:FAIL